From: Anup Patel
Date: Mon, 5 Aug 2019 11:18:34 +0530
Subject: Re: [RFC PATCH v2 04/19] RISC-V: Add initial skeletal KVM support
To: Paolo Bonzini
Cc: Anup Patel, Palmer Dabbelt, Paul Walmsley, Radim K, Daniel Lezcano,
    Thomas Gleixner, Atish Patra, Alistair Francis, Damien Le Moal,
    Christoph Hellwig, kvm@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
In-Reply-To: <9f30d2b6-fa2c-22ff-e597-b9fbd1c700ff@redhat.com>
References: <20190802074620.115029-1-anup.patel@wdc.com>
    <20190802074620.115029-5-anup.patel@wdc.com>
    <9f30d2b6-fa2c-22ff-e597-b9fbd1c700ff@redhat.com>

On Fri, Aug 2, 2019 at 2:31 PM Paolo Bonzini wrote:
>
> On 02/08/19 09:47, Anup Patel wrote:
> > +static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
> > +{
> > +        if (kvm_request_pending(vcpu)) {
> > +                /* TODO: */
> > +
> > +                /*
> > +                 * Clear IRQ_PENDING requests that were made to guarantee
> > +                 * that a VCPU sees new virtual interrupts.
> > +                 */
> > +                kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu);
> > +        }
> > +}
>
> This kvm_check_request can go away (as it does in patch 6).

Argh, I should have removed it in v2 itself. Thanks for catching.

I will update.

>
> > +int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > +{
> > +        int ret;
> > +        unsigned long scause, stval;
>
> You need to wrap this with srcu_read_lock/srcu_read_unlock, otherwise
> stage2_page_fault can access freed memslot arrays. (ARM doesn't have
> this issue because it does not have to decode instructions on MMIO faults).

Looking at KVM ARM/ARM64, I was not sure about use of kvm->srcu.
Thanks for clarifying.

I will use kvm->srcu like you suggested.

>
> That is,
>
>         vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
>
> > +        /* Process MMIO value returned from user-space */
> > +        if (run->exit_reason == KVM_EXIT_MMIO) {
> > +                ret = kvm_riscv_vcpu_mmio_return(vcpu, vcpu->run);
> > +                if (ret)
> > +                        return ret;
> > +        }
> > +
> > +        if (run->immediate_exit)
> > +                return -EINTR;
> > +
> > +        vcpu_load(vcpu);
> > +
> > +        kvm_sigset_activate(vcpu);
> > +
> > +        ret = 1;
> > +        run->exit_reason = KVM_EXIT_UNKNOWN;
> > +        while (ret > 0) {
> > +                /* Check conditions before entering the guest */
> > +                cond_resched();
> > +
> > +                kvm_riscv_check_vcpu_requests(vcpu);
> > +
> > +                preempt_disable();
> > +
> > +                local_irq_disable();
> > +
> > +                /*
> > +                 * Exit if we have a signal pending so that we can deliver
> > +                 * the signal to user space.
> > +                 */
> > +                if (signal_pending(current)) {
> > +                        ret = -EINTR;
> > +                        run->exit_reason = KVM_EXIT_INTR;
> > +                }
>
> Add an srcu_read_unlock here (and then the smp_store_mb can become
> smp_mb__after_srcu_read_unlock + WRITE_ONCE).

Sure, I will update.

> >
> > +                /*
> > +                 * Ensure we set mode to IN_GUEST_MODE after we disable
> > +                 * interrupts and before the final VCPU requests check.
> > +                 * See the comment in kvm_vcpu_exiting_guest_mode() and
> > +                 * Documentation/virtual/kvm/vcpu-requests.rst
> > +                 */
> > +                smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> > +
> > +                if (ret <= 0 ||
> > +                    kvm_request_pending(vcpu)) {
> > +                        vcpu->mode = OUTSIDE_GUEST_MODE;
> > +                        local_irq_enable();
> > +                        preempt_enable();
> > +                        continue;
> > +                }
> > +
> > +                guest_enter_irqoff();
> > +
> > +                __kvm_riscv_switch_to(&vcpu->arch);
> > +
> > +                vcpu->mode = OUTSIDE_GUEST_MODE;
> > +                vcpu->stat.exits++;
> > +
> > +                /* Save SCAUSE and STVAL because we might get an interrupt
> > +                 * between __kvm_riscv_switch_to() and local_irq_enable()
> > +                 * which can potentially overwrite SCAUSE and STVAL.
> > +                 */
> > +                scause = csr_read(CSR_SCAUSE);
> > +                stval = csr_read(CSR_STVAL);
> > +
> > +                /*
> > +                 * We may have taken a host interrupt in VS/VU-mode (i.e.
> > +                 * while executing the guest). This interrupt is still
> > +                 * pending, as we haven't serviced it yet!
> > +                 *
> > +                 * We're now back in HS-mode with interrupts disabled
> > +                 * so enabling the interrupts now will have the effect
> > +                 * of taking the interrupt again, in HS-mode this time.
> > +                 */
> > +                local_irq_enable();
> > +
> > +                /*
> > +                 * We do local_irq_enable() before calling guest_exit() so
> > +                 * that if a timer interrupt hits while running the guest
> > +                 * we account that tick as being spent in the guest. We
> > +                 * enable preemption after calling guest_exit() so that if
> > +                 * we get preempted we make sure ticks after that is not
> > +                 * counted as guest time.
> > +                 */
> > +                guest_exit();
> > +
> > +                preempt_enable();
>
> And another srcu_read_lock here. Using vcpu->srcu_idx instead of a
> local variable also allows system_opcode_insn to wrap kvm_vcpu_block
> with a srcu_read_unlock/srcu_read_lock pair.

Okay.

>
> > +                ret = kvm_riscv_vcpu_exit(vcpu, run, scause, stval);
> > +        }
> > +
> > +        kvm_sigset_deactivate(vcpu);
> And finally srcu_read_unlock here.

Okay.

Regards,
Anup
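
For reference, here is a rough sketch of how the SRCU changes suggested in
this thread could fit into the run loop from the quoted patch. It is only an
illustration assembled from the review comments above: the exact placement of
the re-lock in the early-exit path and the use of vcpu->srcu_idx follow
Paolo's suggestion, and the code actually merged upstream may differ.

    int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            int ret;
            unsigned long scause, stval;

            /* Hold kvm->srcu so stage2 page faults see a stable memslot array. */
            vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);

            /* ... MMIO return handling, immediate_exit check, vcpu_load(), ... */

            ret = 1;
            run->exit_reason = KVM_EXIT_UNKNOWN;
            while (ret > 0) {
                    cond_resched();
                    kvm_riscv_check_vcpu_requests(vcpu);

                    preempt_disable();
                    local_irq_disable();

                    if (signal_pending(current)) {
                            ret = -EINTR;
                            run->exit_reason = KVM_EXIT_INTR;
                    }

                    /*
                     * Drop the SRCU read lock before entering the guest; the
                     * unlock already provides ordering, so smp_store_mb()
                     * becomes a lighter barrier plus a plain store.
                     */
                    srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
                    smp_mb__after_srcu_read_unlock();
                    WRITE_ONCE(vcpu->mode, IN_GUEST_MODE);

                    if (ret <= 0 || kvm_request_pending(vcpu)) {
                            vcpu->mode = OUTSIDE_GUEST_MODE;
                            local_irq_enable();
                            preempt_enable();
                            /* Re-acquire before looping back around. */
                            vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
                            continue;
                    }

                    guest_enter_irqoff();
                    __kvm_riscv_switch_to(&vcpu->arch);
                    vcpu->mode = OUTSIDE_GUEST_MODE;

                    /* Save SCAUSE/STVAL before interrupts can clobber them. */
                    scause = csr_read(CSR_SCAUSE);
                    stval = csr_read(CSR_STVAL);

                    local_irq_enable();
                    guest_exit();
                    preempt_enable();

                    /* Re-acquire before the exit handler may touch memslots. */
                    vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);

                    ret = kvm_riscv_vcpu_exit(vcpu, run, scause, stval);
            }

            kvm_sigset_deactivate(vcpu);

            srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);

            return ret;
    }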