From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Michael Roth,
    Paolo Bonzini, Sean Christopherson, linux-coco@lists.linux.dev,
    Chao Peng
Subject: [PATCH] KVM: guest_memfd: Refactor kvm_gmem into inode->i_private
Date: Tue, 26 Sep 2023 11:03:46 -0700
Message-Id: <8e57c347d6c461431e84ef4354dc076f363f3c01.1695751312.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

Refactor guest_memfd to use inode->i_private to store the information
about kvm_gmem.  Currently it is spread over three places:

- flags in inode->i_private
- struct kvm_gmem in file->private_data
- struct kvm_gmem in a linked list in inode->i_mapping->private_list

That list only ever has a single entry: the relationship between
struct file, struct inode and struct kvm_gmem is 1:1, not 1:many.
Consolidate the related state in one place:

- Move flags into struct kvm_gmem
- Store struct kvm_gmem in inode->i_private
- Don't use file->private_data
- Don't use inode->i_mapping->private_list
- Introduce a helper to convert an inode to its kvm_gmem

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
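Not for the tree -- below is a minimal stand-alone sketch of the layout
this patch moves to, in case it helps review: all per-gmem state hangs
off a single pointer in inode->i_private and is reached through a
to_gmem()-style helper.  The struct inode here is a hypothetical
user-space stub so the example compiles and runs on its own; it is not
the kernel's struct inode, and the field names only mirror the patch.

/*
 * Stand-alone sketch (user space, not kernel code): one struct holds
 * the flags and the back-pointer to the VM, and inode->i_private is
 * the only place that points at it.
 */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct inode {
        void *i_private;                /* stand-in for the real inode field */
};

struct kvm_gmem {
        void *kvm;                      /* owning VM, opaque in this sketch */
        unsigned long flags;            /* was (unsigned long)inode->i_private */
};

/* Mirrors the to_gmem() helper the patch introduces. */
static struct kvm_gmem *to_gmem(struct inode *inode)
{
        return inode->i_private;
}

int main(void)
{
        struct inode inode = { .i_private = NULL };
        struct kvm_gmem *gmem = calloc(1, sizeof(*gmem));

        assert(gmem);
        gmem->flags = 0x1;              /* e.g. a hugepage flag bit */
        inode.i_private = gmem;         /* the single place holding the state */

        /* Anything that has the inode can reach all gmem state from it. */
        printf("flags = %#lx\n", to_gmem(&inode)->flags);

        free(gmem);
        return 0;
}

The diff below applies the same shape to virt/kvm/guest_memfd.c.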

 virt/kvm/guest_memfd.c | 53 ++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 4f3a313f5532..66dd9b55e85c 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -14,14 +14,19 @@ static struct vfsmount *kvm_gmem_mnt;
 struct kvm_gmem {
         struct kvm *kvm;
         struct xarray bindings;
-        struct list_head entry;
+        unsigned long flags;
 };
 
+static struct kvm_gmem *to_gmem(struct inode *inode)
+{
+        return inode->i_private;
+}
+
 static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
         unsigned long huge_index = round_down(index, HPAGE_PMD_NR);
-        unsigned long flags = (unsigned long)inode->i_private;
+        unsigned long flags = to_gmem(inode)->flags;
         struct address_space *mapping = inode->i_mapping;
         gfp_t gfp = mapping_gfp_mask(mapping);
         struct folio *folio;
@@ -134,26 +139,22 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
 
 static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 {
-        struct list_head *gmem_list = &inode->i_mapping->private_list;
+        struct address_space *mapping = inode->i_mapping;
+        struct kvm_gmem *gmem = to_gmem(inode);
         pgoff_t start = offset >> PAGE_SHIFT;
         pgoff_t end = (offset + len) >> PAGE_SHIFT;
-        struct kvm_gmem *gmem;
 
         /*
          * Bindings must be stable across invalidation to ensure the start+end
          * are balanced.
          */
-        filemap_invalidate_lock(inode->i_mapping);
-
-        list_for_each_entry(gmem, gmem_list, entry)
-                kvm_gmem_invalidate_begin(gmem, start, end);
+        filemap_invalidate_lock(mapping);
+        kvm_gmem_invalidate_begin(gmem, start, end);
 
         truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
 
-        list_for_each_entry(gmem, gmem_list, entry)
-                kvm_gmem_invalidate_end(gmem, start, end);
-
-        filemap_invalidate_unlock(inode->i_mapping);
+        kvm_gmem_invalidate_end(gmem, start, end);
+        filemap_invalidate_unlock(mapping);
 
         return 0;
 }
@@ -231,7 +232,7 @@ static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset,
 
 static int kvm_gmem_release(struct inode *inode, struct file *file)
 {
-        struct kvm_gmem *gmem = file->private_data;
+        struct kvm_gmem *gmem = to_gmem(inode);
         struct kvm_memory_slot *slot;
         struct kvm *kvm = gmem->kvm;
         unsigned long index;
@@ -260,8 +261,6 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
         kvm_gmem_invalidate_begin(gmem, 0, -1ul);
         kvm_gmem_invalidate_end(gmem, 0, -1ul);
 
-        list_del(&gmem->entry);
-
         filemap_invalidate_unlock(inode->i_mapping);
 
         mutex_unlock(&kvm->slots_lock);
@@ -305,8 +304,7 @@ static int kvm_gmem_migrate_folio(struct address_space *mapping,
 
 static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
 {
-        struct list_head *gmem_list = &mapping->private_list;
-        struct kvm_gmem *gmem;
+        struct kvm_gmem *gmem = to_gmem(mapping->host);
         pgoff_t start, end;
 
         filemap_invalidate_lock_shared(mapping);
@@ -314,8 +312,7 @@ static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
         start = page->index;
         end = start + thp_nr_pages(page);
 
-        list_for_each_entry(gmem, gmem_list, entry)
-                kvm_gmem_invalidate_begin(gmem, start, end);
+        kvm_gmem_invalidate_begin(gmem, start, end);
 
         /*
          * Do not truncate the range, what action is taken in response to the
@@ -326,8 +323,7 @@ static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
          * error to userspace.
          */
-        list_for_each_entry(gmem, gmem_list, entry)
-                kvm_gmem_invalidate_end(gmem, start, end);
+        kvm_gmem_invalidate_end(gmem, start, end);
 
         filemap_invalidate_unlock_shared(mapping);
 
@@ -382,7 +378,6 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
         if (err)
                 goto err_inode;
 
-        inode->i_private = (void *)(unsigned long)flags;
         inode->i_op = &kvm_gmem_iops;
         inode->i_mapping->a_ops = &kvm_gmem_aops;
         inode->i_mode |= S_IFREG;
@@ -417,10 +412,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
         kvm_get_kvm(kvm);
         gmem->kvm = kvm;
         xa_init(&gmem->bindings);
+        gmem->flags = flags;
 
-        file->private_data = gmem;
-
-        list_add(&gmem->entry, &inode->i_mapping->private_list);
+        inode->i_private = gmem;
 
         fd_install(fd, file);
         return fd;
@@ -476,12 +470,11 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
         if (file->f_op != &kvm_gmem_fops)
                 goto err;
 
-        gmem = file->private_data;
+        inode = file_inode(file);
+        gmem = to_gmem(inode);
         if (gmem->kvm != kvm)
                 goto err;
 
-        inode = file_inode(file);
-
         if (offset < 0 || !PAGE_ALIGNED(offset))
                 return -EINVAL;
 
@@ -538,7 +531,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
         if (!file)
                 return;
 
-        gmem = file->private_data;
+        gmem = to_gmem(file_inode(file));
 
         filemap_invalidate_lock(file->f_mapping);
         xa_store_range(&gmem->bindings, start, end - 1, NULL, GFP_KERNEL);
@@ -563,7 +556,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
         if (!file)
                 return -EFAULT;
 
-        gmem = file->private_data;
+        gmem = to_gmem(file_inode(file));
         if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
                 r = -EIO;
-- 
2.25.1