From: Ahmed Abd El Mawgood <ahmedsoliman@mena.vt.edu>
To: Paolo Bonzini, rkrcmar@redhat.com, Jonathan Corbet, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, hpa@zytor.com, x86@kernel.org,
	kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, ahmedsoliman0x666@gmail.com,
	ovich00@gmail.com, kernel-hardening@lists.openwall.com,
	nigel.edwards@hpe.com, Boris Lukashev, Igor Stoppa
Cc: Ahmed Abd El Mawgood
Subject: [PATCH V8 02/11] KVM: X86: Add arbitrary data pointer in kvm memslot iterator functions
Date: Sun, 6 Jan 2019 21:23:36 +0200
Message-Id: <20190106192345.13578-3-ahmedsoliman@mena.vt.edu>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20190106192345.13578-1-ahmedsoliman@mena.vt.edu>
References: <20190106192345.13578-1-ahmedsoliman@mena.vt.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This will help with sharing data into the slot_level_handler callback. In
my case I need to share a counter for the pages traversed, so that I can
use it in some bitmap. Being able to pass an arbitrary memory pointer into
the slot_level_handler callback makes this easy.

Signed-off-by: Ahmed Abd El Mawgood <ahmedsoliman@mena.vt.edu>
---
 arch/x86/kvm/mmu.c | 65 ++++++++++++++++++++++++++--------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ce770b4462..098df7d135 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1525,7 +1525,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 
 static bool __rmap_write_protect(struct kvm *kvm,
 				 struct kvm_rmap_head *rmap_head,
-				 bool pt_protect)
+				 bool pt_protect, void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1564,7 +1564,8 @@ static bool wrprot_ad_disabled_spte(u64 *sptep)
  * - W bit on ad-disabled SPTEs.
  * Returns true iff any D or W bits were cleared.
  */
-static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			       void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1590,7 +1591,8 @@ static bool spte_set_dirty(u64 *sptep)
 	return mmu_spte_update(sptep, spte);
 }
 
-static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			     void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1622,7 +1624,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false);
+		__rmap_write_protect(kvm, rmap_head, false, NULL);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1648,7 +1650,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_clear_dirty(kvm, rmap_head);
+		__rmap_clear_dirty(kvm, rmap_head, NULL);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
@@ -1701,7 +1703,8 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 
 	for (i = PT_PAGE_TABLE_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
 		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+		write_protected |= __rmap_write_protect(kvm, rmap_head, true,
+							NULL);
 	}
 
 	return write_protected;
@@ -1715,7 +1718,8 @@ static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn)
 	return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn);
 }
 
-static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+			  void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -1735,7 +1739,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			   struct kvm_memory_slot *slot, gfn_t gfn, int level,
 			   unsigned long data)
 {
-	return kvm_zap_rmapp(kvm, rmap_head);
+	return kvm_zap_rmapp(kvm, rmap_head, NULL);
 }
 
 static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
@@ -5552,13 +5556,15 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 }
 
 /* The return value indicates if tlb flush on all vcpus is needed. */
-typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+typedef bool (*slot_level_handler) (struct kvm *kvm,
+		struct kvm_rmap_head *rmap_head, void *data);
 
 /* The caller should hold mmu-lock before calling this function. */
 static __always_inline bool
 slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 			slot_level_handler fn, int start_level, int end_level,
-			gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
+			gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb,
+			void *data)
 {
 	struct slot_rmap_walk_iterator iterator;
 	bool flush = false;
@@ -5566,7 +5572,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
 			end_gfn, &iterator) {
 		if (iterator.rmap)
-			flush |= fn(kvm, iterator.rmap);
+			flush |= fn(kvm, iterator.rmap, data);
 
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			if (flush && lock_flush_tlb) {
@@ -5588,36 +5594,36 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 static __always_inline bool
 slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		  slot_level_handler fn, int start_level, int end_level,
-		  bool lock_flush_tlb)
+		  bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level_range(kvm, memslot, fn, start_level,
 			end_level, memslot->base_gfn,
 			memslot->base_gfn + memslot->npages - 1,
-			lock_flush_tlb);
+			lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-		      slot_level_handler fn, bool lock_flush_tlb)
+		      slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-			slot_level_handler fn, bool lock_flush_tlb)
+			slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL + 1,
-				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+				 PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
-		 slot_level_handler fn, bool lock_flush_tlb)
+		 slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
 	return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-				 PT_PAGE_TABLE_LEVEL, lock_flush_tlb);
+				 PT_PAGE_TABLE_LEVEL, lock_flush_tlb, data);
 }
 
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
@@ -5645,7 +5651,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 			flush |= slot_handle_level_range(kvm, memslot,
 					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
 					PT_MAX_HUGEPAGE_LEVEL, start,
-					end - 1, flush_tlb);
+					end - 1, flush_tlb, NULL);
 		}
 	}
 
@@ -5657,9 +5663,10 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 }
 
 static bool slot_rmap_write_protect(struct kvm *kvm,
-				    struct kvm_rmap_head *rmap_head)
+				    struct kvm_rmap_head *rmap_head,
+				    void *data)
 {
-	return __rmap_write_protect(kvm, rmap_head, false);
+	return __rmap_write_protect(kvm, rmap_head, false, data);
 }
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
@@ -5669,7 +5676,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 
 	spin_lock(&kvm->mmu_lock);
 	flush = slot_handle_all_level(kvm, memslot, slot_rmap_write_protect,
-				      false);
+				      false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	/*
@@ -5696,7 +5703,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
-					 struct kvm_rmap_head *rmap_head)
+					 struct kvm_rmap_head *rmap_head,
+					 void *data)
 {
 	u64 *sptep;
 	struct rmap_iterator iter;
@@ -5740,7 +5748,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	/* FIXME: const-ify all uses of struct kvm_memory_slot. */
 	spin_lock(&kvm->mmu_lock);
 	slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
-			 kvm_mmu_zap_collapsible_spte, true);
+			 kvm_mmu_zap_collapsible_spte, true, NULL);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -5750,7 +5758,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 	bool flush;
 
 	spin_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
+	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	lockdep_assert_held(&kvm->slots_lock);
@@ -5774,7 +5782,7 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 
 	spin_lock(&kvm->mmu_lock);
 	flush = slot_handle_large_level(kvm, memslot, slot_rmap_write_protect,
-					false);
+					false, NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	/* see kvm_mmu_slot_remove_write_access */
@@ -5792,7 +5800,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 	bool flush;
 
 	spin_lock(&kvm->mmu_lock);
-	flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false);
+	flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false,
+				      NULL);
 	spin_unlock(&kvm->mmu_lock);
 
 	lockdep_assert_held(&kvm->slots_lock);
-- 
2.19.2
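
For illustration, here is a minimal user-space sketch of the callback
pattern the patch enables. This is not kernel code: struct rmap_head,
walk_slot() and count_pages() are hypothetical stand-ins for
struct kvm_rmap_head, slot_handle_level_range() and a slot_level_handler;
only the void *data plumbing mirrors the patch.

/*
 * Standalone sketch (plain C, builds with any C compiler) of threading
 * caller state through an iterator via an opaque pointer. All names are
 * invented stand-ins for the mmu.c internals touched by this patch.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct kvm_rmap_head. */
struct rmap_head { int dummy; };

/* Mirrors the new slot_level_handler shape: note the trailing void *data. */
typedef bool (*slot_level_handler)(struct rmap_head *rmap_head, void *data);

/*
 * Hypothetical stand-in for slot_handle_level_range(): visits every rmap
 * head in a "slot" and forwards the opaque data pointer to the handler.
 */
static bool walk_slot(struct rmap_head *heads, int n,
		      slot_level_handler fn, void *data)
{
	bool flush = false;
	int i;

	for (i = 0; i < n; i++)
		flush |= fn(&heads[i], data);
	return flush;
}

/*
 * A handler that shares a counter of traversed pages through *data --
 * the use case named in the commit message.
 */
static bool count_pages(struct rmap_head *rmap_head, void *data)
{
	unsigned long *counter = data;

	(void)rmap_head;	/* nothing to modify in this sketch */
	(*counter)++;
	return false;		/* no TLB flush required */
}

int main(void)
{
	struct rmap_head heads[4] = {{ 0 }};
	unsigned long counter = 0;

	walk_slot(heads, 4, count_pages, &counter);
	printf("pages traversed: %lu\n", counter);	/* prints 4 */
	return 0;
}

The design point is that an opaque pointer lets each caller thread its own
state through the shared iterator without having to widen the handler
signature again later; callers that need nothing pass NULL, exactly as the
converted call sites in the diff do.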