Subject: Re: [PATCH 2/2] KVM: PPC: Book3S HV: rework secure mem slot dropping
To: bharata@linux.ibm.com
Cc: linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au, paulus@samba.org,
    bauerman@linux.ibm.com, sukadev@linux.ibm.com,
    sathnaga@linux.vnet.ibm.com, Ram Pai, Paul Mackerras
References: <20200703155914.40262-1-ldufour@linux.ibm.com>
 <20200703155914.40262-3-ldufour@linux.ibm.com>
 <20200708112531.GA7902@in.ibm.com>
From: Laurent Dufour
Message-ID: <0588d16a-8548-0f55-1132-400807a390a1@linux.ibm.com>
Date: Wed, 8 Jul 2020 14:16:36 +0200
In-Reply-To: <20200708112531.GA7902@in.ibm.com>

On 08/07/2020 at 13:25, Bharata B Rao wrote:
> On Fri, Jul 03, 2020 at 05:59:14PM +0200, Laurent Dufour wrote:
>> When a secure memslot is dropped, all the pages backed in the secure
>> device (i.e. really backed by secure memory by the Ultravisor) should
>> be paged out to normal pages. Previously, this was achieved by
>> triggering the page fault mechanism, which calls
>> kvmppc_svm_page_out() on each page.
>>
>> This can't work when hot unplugging a memory slot because the memory
>> slot is flagged as invalid and gfn_to_pfn() then does not try to
>> access the page, so the page fault mechanism is not triggered.
>>
>> Since the final goal is to make a call to kvmppc_svm_page_out(), it
>> seems simpler to call it directly instead of triggering such a
>> mechanism. This way kvmppc_uvmem_drop_pages() can be called even when
>> hot unplugging a memslot.
>
> Yes, this appears much simpler.

Thanks Bharata for reviewing this.

>
>> Since kvmppc_uvmem_drop_pages() is already holding
>> kvm->arch.uvmem_lock, the call to __kvmppc_svm_page_out() is made.
>> As __kvmppc_svm_page_out() needs the vma pointer to migrate the
>> pages, the VMA is fetched in a lazy way, to avoid triggering
>> find_vma() all the time.
>> In addition, the mmap_sem is held in read mode during that time, not
>> in write mode, since the virtual memory layout is not impacted, and
>> kvm->arch.uvmem_lock prevents concurrent operations on the secure
>> device.
>>
>> Cc: Ram Pai
>> Cc: Bharata B Rao
>> Cc: Paul Mackerras
>> Signed-off-by: Laurent Dufour
>> ---
>>  arch/powerpc/kvm/book3s_hv_uvmem.c | 54 ++++++++++++++++++++----------
>>  1 file changed, 37 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
>> index 852cc9ae6a0b..479ddf16d18c 100644
>> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
>> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
>> @@ -533,35 +533,55 @@ static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
>>   * fault on them, do fault time migration to replace the device PTEs in
>>   * QEMU page table with normal PTEs from newly allocated pages.
>>   */
>> -void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>> +void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
>>  			     struct kvm *kvm, bool skip_page_out)
>>  {
>>  	int i;
>>  	struct kvmppc_uvmem_page_pvt *pvt;
>> -	unsigned long pfn, uvmem_pfn;
>> -	unsigned long gfn = free->base_gfn;
>> +	struct page *uvmem_page;
>> +	struct vm_area_struct *vma = NULL;
>> +	unsigned long uvmem_pfn, gfn;
>> +	unsigned long addr, end;
>> +
>> +	down_read(&kvm->mm->mmap_sem);
>
> You should be using mmap_read_lock(kvm->mm) with recent kernels.

Absolutely, shame on me, I reviewed Michel's series about that!

Paul, Michael, could you fix that when pulling this patch, or should I
send a whole new series?

>
>> +
>> +	addr = slot->userspace_addr;
>> +	end = addr + (slot->npages * PAGE_SIZE);
>>
>> -	for (i = free->npages; i; --i, ++gfn) {
>> -		struct page *uvmem_page;
>> +	gfn = slot->base_gfn;
>> +	for (i = slot->npages; i; --i, ++gfn, addr += PAGE_SIZE) {
>> +
>> +		/* Fetch the VMA if addr is not in the latest fetched one */
>> +		if (!vma || (addr < vma->vm_start || addr >= vma->vm_end)) {
>> +			vma = find_vma_intersection(kvm->mm, addr, end);
>> +			if (!vma ||
>> +			    vma->vm_start > addr || vma->vm_end < end) {
>> +				pr_err("Can't find VMA for gfn:0x%lx\n", gfn);
>> +				break;
>> +			}
>> +		}
>
> The first find_vma_intersection() was called for the range spanning
> the entire memslot, but you have code to check if vma remains valid
> for the new addr in each iteration. Guess you wanted to get vma for
> one page at a time and use it for subsequent pages until it covers
> the range?

That's the goal: fetch the VMA once and not fetch it again until we
reach its end boundary.
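
For context on the locking comment above: the mmap locking API from
Michel's series (merged in v5.8) wraps the direct rwsem operations on
mm->mmap_sem. A minimal sketch of the substitution Bharata is asking
for, not the final patch code:

	/* Before (pre-v5.8): take the mmap semaphore directly */
	down_read(&kvm->mm->mmap_sem);
	/* ... walk the memslot pages ... */
	up_read(&kvm->mm->mmap_sem);

	/* After (v5.8+): wrappers from <linux/mmap_lock.h> */
	mmap_read_lock(kvm->mm);
	/* ... walk the memslot pages ... */
	mmap_read_unlock(kvm->mm);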
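
And to illustrate the lazy VMA lookup discussed at the end, a
simplified, self-contained sketch of the pattern (the range
[addr, end) and the page-out step are placeholders, not the exact
patch code):

	struct vm_area_struct *vma = NULL;
	unsigned long addr, end;

	addr = slot->userspace_addr;
	end = addr + (slot->npages * PAGE_SIZE);

	for (; addr < end; addr += PAGE_SIZE) {
		/*
		 * Only call find_vma_intersection() when addr falls
		 * outside the previously fetched VMA (or none is
		 * cached yet); otherwise reuse the cached pointer.
		 */
		if (!vma || addr < vma->vm_start || addr >= vma->vm_end) {
			vma = find_vma_intersection(kvm->mm, addr, end);
			if (!vma)
				break;	/* unmapped hole in the slot */
		}
		/* ... page out the secure page backing addr via vma ... */
	}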