Date: Fri, 12 Nov 2021 01:32:41 +0000
From: Sean Christopherson <seanjc@google.com>
To: "Maciej S.
Szmigiero"
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Atish Patra,
 David Hildenbrand, Cornelia Huck, Claudio Imbrenda, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org,
 kvm-ppc@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Ben Gardon,
 Huacai Chen, Marc Zyngier, Aleksandar Markovic, Paul Mackerras, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Christian Borntraeger,
 Janosch Frank, Paolo Bonzini
Subject: Re: [PATCH v5.5 30/30] KVM: Dynamically allocate "new" memslots from the get-go
References: <20211104002531.1176691-1-seanjc@google.com> <20211104002531.1176691-31-seanjc@google.com>

On Fri, Nov 12, 2021, Maciej S. Szmigiero wrote:
> On 04.11.2021 01:25, Sean Christopherson wrote:
> > Allocate the "new" memslot for !DELETE memslot updates straight away
> > instead of filling an intermediate on-stack object and forcing
> > kvm_set_memslot() to juggle the allocation and do weird things like
> > reuse the old memslot object in MOVE.
> >
> > In the MOVE case, this results in an "extra" memslot allocation due to
> > allocating both the "new" slot and the "invalid" slot, but that's a
> > temporary and not-huge allocation, and MOVE is a relatively rare
> > memslot operation.
> >
> > Regarding MOVE, drop the open-coded management of the gfn tree with a
> > call to kvm_replace_memslot(), which already handles the case where
> > new->base_gfn != old->base_gfn. This is made possible by virtue of not
> > having to copy the "new" memslot data after erasing the old memslot
> > from the gfn tree.
> > Using kvm_replace_memslot(), and more specifically not reusing the
> > old memslot, means the MOVE case now does hva tree and hash list
> > updates, but that's a small price to pay for simplifying the code and
> > making MOVE align with all the other flavors of updates. The "extra"
> > updates are firmly in the noise from a performance perspective, e.g.
> > the "move (in)active area" selftests show a (very, very) slight
> > improvement.
> >
> > Signed-off-by: Sean Christopherson
>
> Reviewed-by: Maciej S. Szmigiero
>
> For a new patch set version when the "main" commit is rewritten anyway
> (I mean the one titled "Keep memslots in tree-based structures instead
> of array-based ones") it makes sense to integrate changes like these
> into such a modified main commit.
>
> This way a full algorithm / logic check for all the supported memslot
> operations needs to be done only once instead of having to be done
> multiple times for all these intermediate forms of the code (as this is
> a quite time-consuming job to do properly).
>
> I think it only makes sense to separate non-functional changes (like
> renaming of variables, comment rewording, open-coding a helper, etc.)
> into their own patches for ease of reviewing.

I agree that validating intermediate stages is time-consuming and can be
frustrating, but that doesn't diminish the value of intermediate patches.
I do tend to lean too far towards slicing and dicing, but I am quite
confident that I've come out ahead in terms of time spent validating
smaller patches versus time saved because bisection could pinpoint the
exact problem.

E.g. in this patch, arch code can now see a NULL @new. That's _supposed_
to be a non-functional change, but it would be all too easy to have
missed a path in the prep work where an arch accesses @new without first
checking it for NULL (or DELETE). If such a bug were to escape review,
then bisection would point at this patch, not the mega patch that
completely reworked the core memslots behavior.
And IIRC, I actually botched the prior "bitter end" patch and initially
missed a new.npages => npages conversion. Again, no functional change
_intended_, but one of the main reasons for doing small(er) intermediate
patches is precisely so that any unintended behavior stands out and is
easier to debug/triage.

> Or if the main commit was unchanged from the last reviewed version so
> actual changes in the new version will stand out.
>
> Thanks,
> Maciej