From: Anup Patel
To: Palmer Dabbelt, Palmer Dabbelt, Paul Walmsley, Albert Ou, Paolo Bonzini,
    Jonathan Corbet, Greg Kroah-Hartman
Cc: Alexander Graf, Atish Patra, Alistair Francis, Damien Le Moal, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-staging@lists.linux.dev, Anup Patel
Subject: [PATCH v18 02/18] RISC-V: Add initial skeletal KVM support
Date: Wed, 19 May 2021 09:05:37 +0530
Message-Id: <20210519033553.1110536-3-anup.patel@wdc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210519033553.1110536-1-anup.patel@wdc.com>
References: <20210519033553.1110536-1-anup.patel@wdc.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

This patch adds initial skeletal KVM RISC-V support, which has:
1. A simple implementation of arch-specific VM functions, except
   kvm_vm_ioctl_get_dirty_log(), which will be implemented in the
   future as part of stage2 page logging.
2. Stubs of required arch-specific VCPU functions, except
   kvm_arch_vcpu_ioctl_run(), which is semi-complete and extended
   by subsequent patches.
3. Stubs for required arch-specific stage2 MMU functions.
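For context only (this is not part of the patch): once CONFIG_KVM is enabled
and the module is loaded, user space can already poke at this skeleton through
the generic KVM ioctl interface. The ioctls and capability constants in the
rough sketch below are the standard KVM UAPI; the program structure and error
handling are illustrative assumptions.

/* Illustrative only -- generic KVM UAPI usage, not part of this patch. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	/* /dev/kvm appears once kvm.ko is loaded and kvm_arch_init() succeeds. */
	int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	/* These queries are answered by kvm_vm_ioctl_check_extension() below. */
	printf("KVM_CAP_NR_VCPUS:  %d\n",
	       ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS));
	printf("KVM_CAP_MAX_VCPUS: %d\n",
	       ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS));
	printf("KVM_CAP_ONE_REG:   %d\n",
	       ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_ONE_REG));

	/* VCPU creation reaches the kvm_arch_vcpu_create() stub added here. */
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	if (vcpu < 0)
		perror("KVM_CREATE_VCPU");

	return 0;
}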
Signed-off-by: Anup Patel Acked-by: Paolo Bonzini Reviewed-by: Paolo Bonzini Reviewed-by: Alexander Graf --- arch/riscv/Kconfig | 1 + arch/riscv/Makefile | 1 + arch/riscv/include/asm/kvm_host.h | 89 +++++++++ arch/riscv/include/asm/kvm_types.h | 7 + arch/riscv/include/uapi/asm/kvm.h | 47 +++++ arch/riscv/kvm/Kconfig | 33 +++ arch/riscv/kvm/Makefile | 13 ++ arch/riscv/kvm/main.c | 95 +++++++++ arch/riscv/kvm/mmu.c | 80 ++++++++ arch/riscv/kvm/vcpu.c | 311 +++++++++++++++++++++++++++++ arch/riscv/kvm/vcpu_exit.c | 35 ++++ arch/riscv/kvm/vm.c | 79 ++++++++ 12 files changed, 791 insertions(+) create mode 100644 arch/riscv/include/asm/kvm_host.h create mode 100644 arch/riscv/include/asm/kvm_types.h create mode 100644 arch/riscv/include/uapi/asm/kvm.h create mode 100644 arch/riscv/kvm/Kconfig create mode 100644 arch/riscv/kvm/Makefile create mode 100644 arch/riscv/kvm/main.c create mode 100644 arch/riscv/kvm/mmu.c create mode 100644 arch/riscv/kvm/vcpu.c create mode 100644 arch/riscv/kvm/vcpu_exit.c create mode 100644 arch/riscv/kvm/vm.c diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 195c6d319ab8..d0602ea394bc 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -555,4 +555,5 @@ source "kernel/power/Kconfig" endmenu +source "arch/riscv/kvm/Kconfig" source "drivers/firmware/Kconfig" diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index 3eb9590a0775..05687d8b7b99 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -92,6 +92,7 @@ head-y := arch/riscv/kernel/head.o core-y += arch/riscv/ core-$(CONFIG_RISCV_ERRATA_ALTERNATIVE) += arch/riscv/errata/ +core-$(CONFIG_KVM) += arch/riscv/kvm/ libs-y += arch/riscv/lib/ libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h new file mode 100644 index 000000000000..2068475bd168 --- /dev/null +++ b/arch/riscv/include/asm/kvm_host.h @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#ifndef __RISCV_KVM_HOST_H__ +#define __RISCV_KVM_HOST_H__ + +#include +#include +#include + +#ifdef CONFIG_64BIT +#define KVM_MAX_VCPUS (1U << 16) +#else +#define KVM_MAX_VCPUS (1U << 9) +#endif + +#define KVM_HALT_POLL_NS_DEFAULT 500000 + +#define KVM_VCPU_MAX_FEATURES 0 + +#define KVM_REQ_SLEEP \ + KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) +#define KVM_REQ_VCPU_RESET KVM_ARCH_REQ(1) + +struct kvm_vm_stat { + ulong remote_tlb_flush; +}; + +struct kvm_vcpu_stat { + u64 halt_successful_poll; + u64 halt_attempted_poll; + u64 halt_poll_success_ns; + u64 halt_poll_fail_ns; + u64 halt_poll_invalid; + u64 halt_wakeup; + u64 ecall_exit_stat; + u64 wfi_exit_stat; + u64 mmio_exit_user; + u64 mmio_exit_kernel; + u64 exits; +}; + +struct kvm_arch_memory_slot { +}; + +struct kvm_arch { + /* stage2 page table */ + pgd_t *pgd; + phys_addr_t pgd_phys; +}; + +struct kvm_cpu_trap { + unsigned long sepc; + unsigned long scause; + unsigned long stval; + unsigned long htval; + unsigned long htinst; +}; + +struct kvm_vcpu_arch { + /* Don't run the VCPU (blocked) */ + bool pause; + + /* SRCU lock index for in-kernel run loop */ + int srcu_idx; +}; + +static inline void kvm_arch_hardware_unsetup(void) {} +static inline void kvm_arch_sync_events(struct kvm *kvm) {} +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} + +void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu); +int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm); +void kvm_riscv_stage2_free_pgd(struct kvm *kvm); +void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *vcpu); + +int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run); +int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, + struct kvm_cpu_trap *trap); + +static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {} + +#endif /* __RISCV_KVM_HOST_H__ */ diff --git a/arch/riscv/include/asm/kvm_types.h b/arch/riscv/include/asm/kvm_types.h new file mode 100644 index 000000000000..e476b404eb67 --- /dev/null +++ b/arch/riscv/include/asm/kvm_types.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_RISCV_KVM_TYPES_H +#define _ASM_RISCV_KVM_TYPES_H + +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40 + +#endif /* _ASM_RISCV_KVM_TYPES_H */ diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h new file mode 100644 index 000000000000..984d041a3e3b --- /dev/null +++ b/arch/riscv/include/uapi/asm/kvm.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#ifndef __LINUX_KVM_RISCV_H +#define __LINUX_KVM_RISCV_H + +#ifndef __ASSEMBLY__ + +#include +#include + +#define __KVM_HAVE_READONLY_MEM + +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1 + +/* for KVM_GET_REGS and KVM_SET_REGS */ +struct kvm_regs { +}; + +/* for KVM_GET_FPU and KVM_SET_FPU */ +struct kvm_fpu { +}; + +/* KVM Debug exit structure */ +struct kvm_debug_exit_arch { +}; + +/* for KVM_SET_GUEST_DEBUG */ +struct kvm_guest_debug_arch { +}; + +/* definition of registers in kvm_run */ +struct kvm_sync_regs { +}; + +/* dummy definition */ +struct kvm_sregs { +}; + +#endif + +#endif /* __LINUX_KVM_RISCV_H */ diff --git a/arch/riscv/kvm/Kconfig b/arch/riscv/kvm/Kconfig new file mode 100644 index 000000000000..88edd477b3a8 --- /dev/null +++ b/arch/riscv/kvm/Kconfig @@ -0,0 +1,33 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# KVM configuration +# + +source "virt/kvm/Kconfig" + +menuconfig VIRTUALIZATION + bool "Virtualization" + help + Say Y here to get to see options for using your Linux host to run + other operating systems inside virtual machines (guests). + This option alone does not add any kernel code. + + If you say N, all options in this submenu will be skipped and + disabled. + +if VIRTUALIZATION + +config KVM + tristate "Kernel-based Virtual Machine (KVM) support (EXPERIMENTAL)" + depends on RISCV_SBI && MMU + select PREEMPT_NOTIFIERS + select ANON_INODES + select KVM_MMIO + select HAVE_KVM_VCPU_ASYNC_IOCTL + select SRCU + help + Support hosting virtualized guest machines. + + If unsure, say N. + +endif # VIRTUALIZATION diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile new file mode 100644 index 000000000000..37b5a59d4f4f --- /dev/null +++ b/arch/riscv/kvm/Makefile @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: GPL-2.0 +# Makefile for RISC-V KVM support +# + +common-objs-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o) + +ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm + +kvm-objs := $(common-objs-y) + +kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o + +obj-$(CONFIG_KVM) += kvm.o diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c new file mode 100644 index 000000000000..c717d37fd87f --- /dev/null +++ b/arch/riscv/kvm/main.c @@ -0,0 +1,95 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include + +long kvm_arch_dev_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + return -EINVAL; +} + +int kvm_arch_check_processor_compat(void *opaque) +{ + return 0; +} + +int kvm_arch_hardware_setup(void *opaque) +{ + return 0; +} + +int kvm_arch_hardware_enable(void) +{ + unsigned long hideleg, hedeleg; + + hedeleg = 0; + hedeleg |= (1UL << EXC_INST_MISALIGNED); + hedeleg |= (1UL << EXC_BREAKPOINT); + hedeleg |= (1UL << EXC_SYSCALL); + hedeleg |= (1UL << EXC_INST_PAGE_FAULT); + hedeleg |= (1UL << EXC_LOAD_PAGE_FAULT); + hedeleg |= (1UL << EXC_STORE_PAGE_FAULT); + csr_write(CSR_HEDELEG, hedeleg); + + hideleg = 0; + hideleg |= (1UL << IRQ_VS_SOFT); + hideleg |= (1UL << IRQ_VS_TIMER); + hideleg |= (1UL << IRQ_VS_EXT); + csr_write(CSR_HIDELEG, hideleg); + + csr_write(CSR_HCOUNTEREN, -1UL); + + csr_write(CSR_HVIP, 0); + + return 0; +} + +void kvm_arch_hardware_disable(void) +{ + csr_write(CSR_HEDELEG, 0); + csr_write(CSR_HIDELEG, 0); +} + +int kvm_arch_init(void *opaque) +{ + if (!riscv_isa_extension_available(NULL, h)) { + kvm_info("hypervisor extension not available\n"); + return -ENODEV; + } + + if (sbi_spec_is_0_1()) { + kvm_info("require SBI v0.2 or higher\n"); + return -ENODEV; + } + + if (sbi_probe_extension(SBI_EXT_RFENCE) <= 0) { + kvm_info("require SBI RFENCE extension\n"); + return -ENODEV; + } + + kvm_info("hypervisor extension available\n"); + + return 0; +} + +void kvm_arch_exit(void) +{ +} + +static int riscv_kvm_init(void) +{ + return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE); +} +module_init(riscv_kvm_init); diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c new file mode 100644 index 000000000000..abfd2b22fa8e --- /dev/null +++ b/arch/riscv/kvm/mmu.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) +{ +} + +void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free) +{ +} + +void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) +{ +} + +void kvm_arch_flush_shadow_all(struct kvm *kvm) +{ + /* TODO: */ +} + +void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) +{ +} + +void kvm_arch_commit_memory_region(struct kvm *kvm, + const struct kvm_userspace_memory_region *mem, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + enum kvm_mr_change change) +{ + /* TODO: */ +} + +int kvm_arch_prepare_memory_region(struct kvm *kvm, + struct kvm_memory_slot *memslot, + const struct kvm_userspace_memory_region *mem, + enum kvm_mr_change change) +{ + /* TODO: */ + return 0; +} + +void kvm_riscv_stage2_flush_cache(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +int kvm_riscv_stage2_alloc_pgd(struct kvm *kvm) +{ + /* TODO: */ + return 0; +} + +void kvm_riscv_stage2_free_pgd(struct kvm *kvm) +{ + /* TODO: */ +} + +void kvm_riscv_stage2_update_hgatp(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c new file mode 100644 index 000000000000..d76cecf93de4 --- /dev/null +++ b/arch/riscv/kvm/vcpu.c @@ -0,0 +1,311 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. 
+ * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct kvm_stats_debugfs_item debugfs_entries[] = { + VCPU_STAT("halt_successful_poll", halt_successful_poll), + VCPU_STAT("halt_attempted_poll", halt_attempted_poll), + VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns), + VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns), + VCPU_STAT("halt_poll_invalid", halt_poll_invalid), + VCPU_STAT("halt_wakeup", halt_wakeup), + VCPU_STAT("ecall_exit_stat", ecall_exit_stat), + VCPU_STAT("wfi_exit_stat", wfi_exit_stat), + VCPU_STAT("mmio_exit_user", mmio_exit_user), + VCPU_STAT("mmio_exit_kernel", mmio_exit_kernel), + VCPU_STAT("exits", exits), + { NULL } +}; + +int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) +{ + return 0; +} + +int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) +{ +} + +int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) +{ +} + +void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) +{ +} + +int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return 0; +} + +bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) +{ + /* TODO: */ + return false; +} + +vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) +{ + return VM_FAULT_SIGBUS; +} + +long kvm_arch_vcpu_async_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + /* TODO; */ + return -ENOIOCTLCMD; +} + +long kvm_arch_vcpu_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + /* TODO: */ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, + struct kvm_sregs *sregs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, + struct kvm_sregs *sregs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, + struct kvm_translation *tr) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) +{ + return -EINVAL; +} + +int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, + struct kvm_mp_state *mp_state) +{ + /* TODO: */ + return 0; +} + +int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, + struct kvm_guest_debug *dbg) +{ + /* TODO; To be implemented later. 
*/ + return -EINVAL; +} + +void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) +{ + /* TODO: */ + + kvm_riscv_stage2_update_hgatp(vcpu); +} + +void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu) +{ + /* TODO: */ +} + +int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) +{ + int ret; + struct kvm_cpu_trap trap; + struct kvm_run *run = vcpu->run; + + vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); + + /* Process MMIO value returned from user-space */ + if (run->exit_reason == KVM_EXIT_MMIO) { + ret = kvm_riscv_vcpu_mmio_return(vcpu, vcpu->run); + if (ret) { + srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); + return ret; + } + } + + if (run->immediate_exit) { + srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); + return -EINTR; + } + + vcpu_load(vcpu); + + kvm_sigset_activate(vcpu); + + ret = 1; + run->exit_reason = KVM_EXIT_UNKNOWN; + while (ret > 0) { + /* Check conditions before entering the guest */ + cond_resched(); + + kvm_riscv_check_vcpu_requests(vcpu); + + preempt_disable(); + + local_irq_disable(); + + /* + * Exit if we have a signal pending so that we can deliver + * the signal to user space. + */ + if (signal_pending(current)) { + ret = -EINTR; + run->exit_reason = KVM_EXIT_INTR; + } + + /* + * Ensure we set mode to IN_GUEST_MODE after we disable + * interrupts and before the final VCPU requests check. + * See the comment in kvm_vcpu_exiting_guest_mode() and + * Documentation/virtual/kvm/vcpu-requests.rst + */ + vcpu->mode = IN_GUEST_MODE; + + srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); + smp_mb__after_srcu_read_unlock(); + + if (ret <= 0 || + kvm_request_pending(vcpu)) { + vcpu->mode = OUTSIDE_GUEST_MODE; + local_irq_enable(); + preempt_enable(); + vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); + continue; + } + + guest_enter_irqoff(); + + __kvm_riscv_switch_to(&vcpu->arch); + + vcpu->mode = OUTSIDE_GUEST_MODE; + vcpu->stat.exits++; + + /* + * Save SCAUSE, STVAL, HTVAL, and HTINST because we might + * get an interrupt between __kvm_riscv_switch_to() and + * local_irq_enable() which can potentially change CSRs. + */ + trap.sepc = 0; + trap.scause = csr_read(CSR_SCAUSE); + trap.stval = csr_read(CSR_STVAL); + trap.htval = csr_read(CSR_HTVAL); + trap.htinst = csr_read(CSR_HTINST); + + /* + * We may have taken a host interrupt in VS/VU-mode (i.e. + * while executing the guest). This interrupt is still + * pending, as we haven't serviced it yet! + * + * We're now back in HS-mode with interrupts disabled + * so enabling the interrupts now will have the effect + * of taking the interrupt again, in HS-mode this time. + */ + local_irq_enable(); + + /* + * We do local_irq_enable() before calling guest_exit() so + * that if a timer interrupt hits while running the guest + * we account that tick as being spent in the guest. We + * enable preemption after calling guest_exit() so that if + * we get preempted we make sure ticks after that is not + * counted as guest time. 
+ */ + guest_exit(); + + preempt_enable(); + + vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); + + ret = kvm_riscv_vcpu_exit(vcpu, run, &trap); + } + + kvm_sigset_deactivate(vcpu); + + vcpu_put(vcpu); + + srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); + + return ret; +} diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c new file mode 100644 index 000000000000..4484e9200fe4 --- /dev/null +++ b/arch/riscv/kvm/vcpu_exit.c @@ -0,0 +1,35 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include + +/** + * kvm_riscv_vcpu_mmio_return -- Handle MMIO loads after user space emulation + * or in-kernel IO emulation + * + * @vcpu: The VCPU pointer + * @run: The VCPU run struct containing the mmio data + */ +int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) +{ + /* TODO: */ + return 0; +} + +/* + * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on + * proper exit to userspace. + */ +int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, + struct kvm_cpu_trap *trap) +{ + /* TODO: */ + return 0; +} diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c new file mode 100644 index 000000000000..d6776b4819bb --- /dev/null +++ b/arch/riscv/kvm/vm.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * + * Authors: + * Anup Patel + */ + +#include +#include +#include +#include +#include + +int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log) +{ + /* TODO: To be added later. */ + return -EOPNOTSUPP; +} + +int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) +{ + int r; + + r = kvm_riscv_stage2_alloc_pgd(kvm); + if (r) + return r; + + return 0; +} + +void kvm_arch_destroy_vm(struct kvm *kvm) +{ + int i; + + for (i = 0; i < KVM_MAX_VCPUS; ++i) { + if (kvm->vcpus[i]) { + kvm_arch_vcpu_destroy(kvm->vcpus[i]); + kvm->vcpus[i] = NULL; + } + } +} + +int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) +{ + int r; + + switch (ext) { + case KVM_CAP_DEVICE_CTRL: + case KVM_CAP_USER_MEMORY: + case KVM_CAP_DESTROY_MEMORY_REGION_WORKS: + case KVM_CAP_ONE_REG: + case KVM_CAP_READONLY_MEM: + case KVM_CAP_MP_STATE: + case KVM_CAP_IMMEDIATE_EXIT: + r = 1; + break; + case KVM_CAP_NR_VCPUS: + r = num_online_cpus(); + break; + case KVM_CAP_MAX_VCPUS: + r = KVM_MAX_VCPUS; + break; + case KVM_CAP_NR_MEMSLOTS: + r = KVM_USER_MEM_SLOTS; + break; + default: + r = 0; + break; + } + + return r; +} + +long kvm_arch_vm_ioctl(struct file *filp, + unsigned int ioctl, unsigned long arg) +{ + return -EINVAL; +} -- 2.25.1
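(Again for context only, not part of the patch.) The in-kernel run loop in
kvm_arch_vcpu_ioctl_run() above pairs with a user-space loop like the minimal
sketch below. The kvm_run mmap protocol and exit reasons are the generic KVM
UAPI; run_vcpu() and its fd arguments are hypothetical names carried over from
the earlier sketch.

/* Illustrative only -- the user-space half of the vcpu run loop. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* 'kvm' is the /dev/kvm fd, 'vcpu' the fd from KVM_CREATE_VCPU. */
static int run_vcpu(int kvm, int vcpu)
{
	int sz = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
	if (sz < 0)
		return -1;

	struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
				   MAP_SHARED, vcpu, 0);
	if (run == MAP_FAILED)
		return -1;

	for (;;) {
		if (ioctl(vcpu, KVM_RUN, 0) < 0) {
			if (errno == EINTR)
				continue;	/* signal; exit_reason is KVM_EXIT_INTR */
			return -1;
		}

		switch (run->exit_reason) {
		case KVM_EXIT_MMIO:
			/*
			 * Emulate the access; on the next KVM_RUN the kernel
			 * completes it via kvm_riscv_vcpu_mmio_return().
			 */
			if (!run->mmio.is_write)
				memset(run->mmio.data, 0, run->mmio.len);
			break;
		default:
			printf("unhandled exit %u\n", run->exit_reason);
			return 0;
		}
	}
}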