Date: Mon, 11 Mar 2024 10:23:28 -0700
From: Sean Christopherson
To: isaku.yamahata@intel.com
Cc: kvm@vger.kernel.org, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org,
    Paolo Bonzini, Michael Roth, David Matlack, Federico Parola
Subject: Re: [RFC PATCH 2/8] KVM: Add KVM_MAP_MEMORY vcpu ioctl to pre-populate guest memory
In-Reply-To: <012b59708114ba121735769de94756fa5af3204d.1709288671.git.isaku.yamahata@intel.com>
References: <012b59708114ba121735769de94756fa5af3204d.1709288671.git.isaku.yamahata@intel.com>

On Fri, Mar 01, 2024, isaku.yamahata@intel.com wrote:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d1fd9cb5d037..d77c9b79d76b 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4419,6 +4419,69 @@ static int kvm_vcpu_ioctl_get_stats_fd(struct kvm_vcpu *vcpu)
>  	return fd;
>  }
> 
> +__weak int kvm_arch_vcpu_pre_map_memory(struct kvm_vcpu *vcpu)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +__weak int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> +				    struct kvm_memory_mapping *mapping)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static int kvm_vcpu_map_memory(struct kvm_vcpu *vcpu,
> +			       struct kvm_memory_mapping *mapping)
> +{
> +	bool added = false;
> +	int idx, r = 0;

Pointless initialization of 'r'.

> +
> +	if (mapping->flags & ~(KVM_MEMORY_MAPPING_FLAG_WRITE |
> +			       KVM_MEMORY_MAPPING_FLAG_EXEC |
> +			       KVM_MEMORY_MAPPING_FLAG_USER |
> +			       KVM_MEMORY_MAPPING_FLAG_PRIVATE))
> +		return -EINVAL;
> +	if ((mapping->flags & KVM_MEMORY_MAPPING_FLAG_PRIVATE) &&
> +	    !kvm_arch_has_private_mem(vcpu->kvm))
> +		return -EINVAL;
> +
> +	/* Sanity check */

Pointless comment.
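As an aside, one way to avoid the separate PRIVATE check above would be to build
up a mask of valid flags, e.g. (completely untested, purely illustrative):

	u64 valid_flags = KVM_MEMORY_MAPPING_FLAG_WRITE |
			  KVM_MEMORY_MAPPING_FLAG_EXEC |
			  KVM_MEMORY_MAPPING_FLAG_USER;

	/* Accept PRIVATE only if the VM actually supports private memory. */
	if (kvm_arch_has_private_mem(vcpu->kvm))
		valid_flags |= KVM_MEMORY_MAPPING_FLAG_PRIVATE;

	if (mapping->flags & ~valid_flags)
		return -EINVAL;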
> +	if (!IS_ALIGNED(mapping->source, PAGE_SIZE) ||
> +	    !mapping->nr_pages ||
> +	    mapping->base_gfn + mapping->nr_pages <= mapping->base_gfn)
> +		return -EINVAL;
> +
> +	vcpu_load(vcpu);
> +	idx = srcu_read_lock(&vcpu->kvm->srcu);
> +	r = kvm_arch_vcpu_pre_map_memory(vcpu);

This hook is unnecessary; x86's kvm_mmu_reload() is optimized for the happy path
where the MMU is already loaded.  Just make the call from kvm_arch_vcpu_map_memory().

> +	if (r)
> +		return r;

Which is a good thing, because this leaks the SRCU lock.

> +
> +	while (mapping->nr_pages) {
> +		if (signal_pending(current)) {
> +			r = -ERESTARTSYS;

Why -ERESTARTSYS instead of -EINTR?  The latter is KVM's typical response to a
pending signal.

> +			break;
> +		}
> +
> +		if (need_resched())

No need to manually check need_resched(); the below is a _conditional_ resched.
The reason KVM explicitly checks need_resched() in MMU flows is that KVM needs
to drop mmu_lock before rescheduling, i.e. calling cond_resched() directly would
try to schedule() while holding a spinlock.

> +			cond_resched();
> +
> +		r = kvm_arch_vcpu_map_memory(vcpu, mapping);
> +		if (r)
> +			break;
> +
> +		added = true;
> +	}
> +
> +	srcu_read_unlock(&vcpu->kvm->srcu, idx);
> +	vcpu_put(vcpu);
> +
> +	if (added && mapping->nr_pages > 0)
> +		r = -EAGAIN;

No, this clobbers 'r', which might hold a fatal error code.  I don't see any
reason for common code to ever force -EAGAIN; it can't possibly know whether
trying again is reasonable.

> +
> +	return r;
> +}
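If it helps, here's very roughly the shape I'm thinking of for the main loop,
sans the pre-map hook (completely untested, and assumes kvm_arch_vcpu_map_memory()
handles kvm_mmu_reload() or the arch equivalent internally):

	vcpu_load(vcpu);
	idx = srcu_read_lock(&vcpu->kvm->srcu);

	/* nr_pages was validated as non-zero, so 'r' is always written below. */
	while (mapping->nr_pages) {
		if (signal_pending(current)) {
			r = -EINTR;
			break;
		}

		/* No spinlocks are held at this point, rescheduling directly is safe. */
		cond_resched();

		r = kvm_arch_vcpu_map_memory(vcpu, mapping);
		if (r)
			break;
	}

	srcu_read_unlock(&vcpu->kvm->srcu, idx);
	vcpu_put(vcpu);

	/* Return whatever the arch call reported; don't clobber it with -EAGAIN. */
	return r;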