References: <20220519153713.819591-1-chao.p.peng@linux.intel.com>
 <20220519153713.819591-7-chao.p.peng@linux.intel.com>
 <20220624090246.GA2181919@chaop.bj.intel.com>
In-Reply-To: <20220624090246.GA2181919@chaop.bj.intel.com>
From: Vishal Annapurve
Date: Thu, 30 Jun 2022 12:14:13 -0700
Subject: Re: [PATCH v6 6/8] KVM: Handle page fault for private memory
To: Chao Peng
Cc: "Nikunj A. Dadhania", kvm list, LKML, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
    Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86,
Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Yu Zhang , "Kirill A . Shutemov" , Andy Lutomirski , Jun Nakajima , Dave Hansen , Andi Kleen , David Hildenbrand , aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com Content-Type: text/plain; charset="UTF-8" X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org ... > > > /* > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c > > > index afe18d70ece7..e18460e0d743 100644 > > > --- a/arch/x86/kvm/mmu/mmu.c > > > +++ b/arch/x86/kvm/mmu/mmu.c > > > @@ -2899,6 +2899,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, > > > if (max_level == PG_LEVEL_4K) > > > return PG_LEVEL_4K; > > > > > > + if (kvm_slot_is_private(slot)) > > > + return max_level; > > > > Can you explain the rationale behind the above change? > > AFAIU, this overrides the transparent_hugepage=never setting for both > > shared and private mappings. > > As Sean pointed out, this should check against fault->is_private instead > of the slot. For private fault, the level is retrieved and stored to > fault->max_level in kvm_faultin_pfn_private() instead of here. > > For shared fault, it will continue to query host_level below. For > private fault, the host level has already been accounted in > kvm_faultin_pfn_private(). > > Chao > > With transparent_hugepages=always setting I see issues with the current implementation. Scenario: 1) Guest accesses a gfn range 0x800-0xa00 as private 2) Guest calls mapgpa to convert the range 0x84d-0x86e as shared 3) Guest tries to access recently converted memory as shared for the first time Guest VM shutdown is observed after step 3 -> Guest is unable to proceed further since somehow code section is not as expected Corresponding KVM trace logs after step 3: VCPU-0-61883 [078] ..... 72276.115679: kvm_page_fault: address 84d000 error_code 4 VCPU-0-61883 [078] ..... 72276.127005: kvm_mmu_spte_requested: gfn 84d pfn 100b4a4d level 2 VCPU-0-61883 [078] ..... 72276.127008: kvm_tdp_mmu_spte_changed: as id 0 gfn 800 level 2 old_spte 100b1b16827 new_spte 100b4a00ea7 VCPU-0-61883 [078] ..... 72276.127009: kvm_mmu_prepare_zap_page: sp gen 0 gfn 800 l1 8-byte q0 direct wux nxe ad root 0 sync VCPU-0-61883 [078] ..... 72276.127009: kvm_tdp_mmu_spte_changed: as id 0 gfn 800 level 1 old_spte 1003eb27e67 new_spte 5a0 VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 801 level 1 old_spte 10056cc8e67 new_spte 5a0 VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 802 level 1 old_spte 10056fa2e67 new_spte 5a0 VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 803 level 1 old_spte 0 new_spte 5a0 .... VCPU-0-61883 [078] ..... 72276.127089: kvm_tdp_mmu_spte_changed: as id 0 gfn 9ff level 1 old_spte 100a43f4e67 new_spte 5a0 VCPU-0-61883 [078] ..... 72276.127090: kvm_mmu_set_spte: gfn 800 spte 100b4a00ea7 (rwxu) level 2 at 10052fa5020 VCPU-0-61883 [078] ..... 
With the transparent_hugepage=always setting, I see issues with the
current implementation.

Scenario:
1) The guest accesses a gfn range 0x800-0xa00 as private.
2) The guest calls mapgpa to convert the range 0x84d-0x86e to shared.
3) The guest tries to access the recently converted memory as shared
   for the first time.

Guest VM shutdown is observed after step 3: the guest is unable to
proceed further since the code section is seemingly no longer as
expected.

Corresponding KVM trace logs after step 3:

VCPU-0-61883 [078] ..... 72276.115679: kvm_page_fault: address 84d000 error_code 4
VCPU-0-61883 [078] ..... 72276.127005: kvm_mmu_spte_requested: gfn 84d pfn 100b4a4d level 2
VCPU-0-61883 [078] ..... 72276.127008: kvm_tdp_mmu_spte_changed: as id 0 gfn 800 level 2 old_spte 100b1b16827 new_spte 100b4a00ea7
VCPU-0-61883 [078] ..... 72276.127009: kvm_mmu_prepare_zap_page: sp gen 0 gfn 800 l1 8-byte q0 direct wux nxe ad root 0 sync
VCPU-0-61883 [078] ..... 72276.127009: kvm_tdp_mmu_spte_changed: as id 0 gfn 800 level 1 old_spte 1003eb27e67 new_spte 5a0
VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 801 level 1 old_spte 10056cc8e67 new_spte 5a0
VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 802 level 1 old_spte 10056fa2e67 new_spte 5a0
VCPU-0-61883 [078] ..... 72276.127010: kvm_tdp_mmu_spte_changed: as id 0 gfn 803 level 1 old_spte 0 new_spte 5a0
....
VCPU-0-61883 [078] ..... 72276.127089: kvm_tdp_mmu_spte_changed: as id 0 gfn 9ff level 1 old_spte 100a43f4e67 new_spte 5a0
VCPU-0-61883 [078] ..... 72276.127090: kvm_mmu_set_spte: gfn 800 spte 100b4a00ea7 (rwxu) level 2 at 10052fa5020
VCPU-0-61883 [078] ..... 72276.127091: kvm_fpu: unload

It looks like, with transparent huge pages enabled, KVM tried to handle
the shared memory fault on gfn 0x84d by coalescing nearby 4K pages into
a contiguous 2MB mapping at gfn 0x800, since level 2 was requested in
kvm_mmu_spte_requested. This caused the private memory contents in the
ranges 0x800-0x84c and 0x86e-0xa00 to get unmapped from the guest,
leading to the guest VM shutdown.

Does getting the mapping level as per the fault access type help address
the above issue? Any such coalescing should not cross private-to-shared
or shared-to-private region boundaries (a rough sketch of the clamping I
have in mind is in the P.S. below).

> > >         host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
> > >         return min(host_level, max_level);
> > > }
> >

Regards,
Vishal
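
P.S.: As an illustration of the boundary clamping (untested, and
range_has_uniform_privateness() is an invented placeholder for whatever
this series uses to query the shared/private state of a gfn range):

static int clamp_level_to_conversion_boundary(struct kvm *kvm, gfn_t gfn,
					      int level, bool is_private)
{
	/*
	 * Shrink the mapping level until the whole would-be huge page
	 * covers gfns with the same shared/private state as the fault.
	 */
	while (level > PG_LEVEL_4K) {
		gfn_t base = gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);

		if (range_has_uniform_privateness(kvm, base,
						  KVM_PAGES_PER_HPAGE(level),
						  is_private))
			break;
		level--;
	}

	return level;
}

With something like this applied on both the shared and private fault
paths, the level-2 coalescing at gfn 0x800 in the trace above would have
been clamped back to 4K around the converted sub-range.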