2020-12-11 12:18:17

by Yanan Wang

Subject: [RFC PATCH 0/1] Add prejudgement for relaxing permissions only case

Hi folks,

We found that during dirty logging, after dirty logging has stopped, and
even during normal operation of a guest configured with huge mappings and
many vCPUs, translation faults by different vCPUs on the same GPA can
occur in quick succession. See below for the trace logs; there are two
reasons for this.

(1) If some vCPUs access the same GPA at the same time and the leaf PTE
is not yet set, they will all take translation faults. The first vCPU to
hold mmu_lock will set a valid leaf PTE, and the others will later choose
whether to update the leaf PTE.

(2) When a leaf entry or a table entry is changed with break-before-make,
if some vCPUs accessing the same GPA happen to hit the window in which the
target PTE is invalid, they will all take translation faults and will
later choose whether to update the leaf PTE.

The worst case looks like this: several vCPUs take translation faults on
the same GPA with different prots and fight each other, repeatedly
flipping the access permissions of the PTE back and forth with
break-before-make. Each BBM-invalid window can trigger further unnecessary
translation faults. As a result, useless small loops occur, which can
leave a vCPU stuck. We have occasionally hit such a stall during guest
migration and at migration-stop time.

To avoid unnecessary updates and small loops, add a prejudgement in the
translation fault handler: skip updating a valid leaf PTE if we are trying
to recreate exactly the same mapping or only to reduce access permissions
(such as RW-->RO), and update the valid leaf PTE without break-before-make
if we are only adding permissions.
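
For reference, the permission comparison at the heart of the prejudgement
can be sketched as a small standalone C model. This is a simplified
illustration of the attribute trick used by the patch, not the kernel code
itself; the bit positions follow the arm64 stage-2 descriptor layout
(S2AP R/W at bits 6/7, XN at bit 54):

```c
#include <stdint.h>

/* arm64 stage-2 leaf attribute bits: S2AP R/W at bits 6/7, XN at bit 54. */
#define S2_ATTR_R	(1ULL << 6)
#define S2_ATTR_W	(1ULL << 7)
#define S2_ATTR_XN	(1ULL << 54)
#define S2_ATTR_PERMS	(S2_ATTR_R | S2_ATTR_W | S2_ATTR_XN)

/*
 * Decide whether a new leaf PTE only keeps or reduces the permissions of
 * the existing one. XN is an inverted permission ("execute never"), so
 * flipping it first makes all three bits read as "1 == more access"; a
 * numerically smaller-or-equal result then means nothing was relaxed,
 * and the update can be skipped without break-before-make.
 */
static int s2_perms_not_relaxed(uint64_t old_pte, uint64_t new_pte)
{
	uint64_t old_attr = (old_pte & S2_ATTR_PERMS) ^ S2_ATTR_XN;
	uint64_t new_attr = (new_pte & S2_ATTR_PERMS) ^ S2_ATTR_XN;

	return new_attr <= old_attr;
}
```

With this, an RW-->RO fault on a valid PTE is skipped, a RO-->RW fault
relaxes the PTE in place, and recreating the identical mapping is a no-op.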

Yanan Wang (1):
KVM: arm64: Add prejudgement for relaxing permissions only case in
stage2 translation fault handler

arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
1 file changed, 52 insertions(+), 21 deletions(-)

--
2.19.1

Trace logs for a guest with 96 vCPUs and 1G huge mappings.

*********************************************************************************
Recreating the same mappings and small loops in dirty-logging period.
*********************************************************************************
Recreating the same mappings:
CPU 94/KVM-8590 [094] ...2 82538.821614: user_mem_abort: logging_active 1, vcpu_id 94, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 94/KVM-8590 [094] ...2 82538.821615: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 55/KVM-8547 [055] ...2 82538.821618: user_mem_abort: logging_active 1, vcpu_id 55, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 55/KVM-8547 [055] ...2 82538.821619: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 78/KVM-8572 [078] ...2 82538.821620: user_mem_abort: logging_active 1, vcpu_id 78, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 78/KVM-8572 [078] ...2 82538.821622: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 59/KVM-8552 [059] ...2 82538.821624: user_mem_abort: logging_active 1, vcpu_id 59, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 59/KVM-8552 [059] ...2 82538.821625: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 57/KVM-8549 [057] ...2 82538.821626: user_mem_abort: logging_active 1, vcpu_id 57, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 57/KVM-8549 [057] ...2 82538.821626: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 76/KVM-8570 [076] ...2 82538.821628: user_mem_abort: logging_active 1, vcpu_id 76, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 76/KVM-8570 [076] ...2 82538.821629: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 79/KVM-8573 [079] ...2 82538.821631: user_mem_abort: logging_active 1, vcpu_id 79, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 79/KVM-8573 [079] ...2 82538.821631: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff
CPU 74/KVM-8568 [074] ...2 82538.821633: user_mem_abort: logging_active 1, vcpu_id 74, f_ipa 0x83fffc000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 74/KVM-8568 [074] ...2 82538.821633: stage2_map_walker_try_leaf_equal: addr 0x83fffc000 , level 3, old_pte 0x40002a7fffc7ff, new_pte 0x40002a7fffc7ff

Small loops:
CPU 65/KVM-8559 [065] ...2 82562.645538: user_mem_abort: logging_active 1, vcpu_id 65, f_ipa 0x80e82c000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 65/KVM-8559 [065] ...2 82562.645570: user_mem_abort: logging_active 1, vcpu_id 65, f_ipa 0x80e82c000 , fault_status 0xc, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 67/KVM-8561 [067] ...2 82562.645615: user_mem_abort: logging_active 1, vcpu_id 67, f_ipa 0x80e82c000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 67/KVM-8561 [067] ...2 82562.645616: stage2_map_walker_try_leaf_unequal: addr 0x80e82c000 , level 3, old_pte 0x40002a4e82c7ff, new_pte 0x40002a4e82c77f
CPU 67/KVM-8561 [067] ...2 82562.645651: user_mem_abort: logging_active 1, vcpu_id 67, f_ipa 0x80e82c000 , fault_status 0xc, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0

CPU 19/KVM-8510 [019] ...2 82562.645840: user_mem_abort: logging_active 1, vcpu_id 19, f_ipa 0x7e6aa6000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 19/KVM-8510 [019] ...2 82562.645873: user_mem_abort: logging_active 1, vcpu_id 19, f_ipa 0x7e6aa6000 , fault_status 0xc, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 16/KVM-8507 [016] ...2 82562.645876: user_mem_abort: logging_active 1, vcpu_id 16, f_ipa 0x7e6aa6000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 16/KVM-8507 [016] ...2 82562.645877: stage2_map_walker_try_leaf_unequal: addr 0x7e6aa6000 , level 3, old_pte 0x40002aa6aa67ff, new_pte 0x40002aa6aa677f
CPU 19/KVM-8510 [019] ...2 82562.645902: user_mem_abort: logging_active 1, vcpu_id 19, f_ipa 0x7e6aa6000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 19/KVM-8510 [019] ...2 82562.645903: stage2_map_walker_try_leaf_unequal: addr 0x7e6aa6000 , level 3, old_pte 0x40002aa6aa677f, new_pte 0x40002aa6aa67ff
CPU 16/KVM-8507 [016] ...2 82562.645908: user_mem_abort: logging_active 1, vcpu_id 16, f_ipa 0x7e6aa6000 , fault_status 0xc, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0

CPU 90/KVM-8585 [090] ...2 82562.646211: user_mem_abort: logging_active 1, vcpu_id 90, f_ipa 0x818f9f000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 47/KVM-8539 [047] ...2 82562.646218: user_mem_abort: logging_active 1, vcpu_id 47, f_ipa 0x8189ff000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 47/KVM-8539 [047] ...2 82562.646219: stage2_map_walker_try_leaf_equal: addr 0x8189ff000 , level 3, old_pte 0x40002a589ff77f, new_pte 0x40002a589ff77f
CPU 42/KVM-8534 [042] ...2 82562.646221: user_mem_abort: logging_active 1, vcpu_id 42, f_ipa 0x8189ff000 , fault_status 0xc, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 59/KVM-8552 [059] ...2 82562.646226: user_mem_abort: logging_active 1, vcpu_id 59, f_ipa 0x8189ff000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 59/KVM-8552 [059] ...2 82562.646227: stage2_map_walker_try_leaf_unequal: addr 0x8189ff000 , level 3, old_pte 0x40002a589ff7ff, new_pte 0x40002a589ff77f
CPU 65/KVM-8559 [065] ...2 82562.646231: user_mem_abort: logging_active 1, vcpu_id 65, f_ipa 0x8189ff000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0
CPU 65/KVM-8559 [065] ...2 82562.646231: stage2_map_walker_try_leaf_equal: addr 0x8189ff000 , level 3, old_pte 0x40002a589ff77f, new_pte 0x40002a589ff77f
CPU 50/KVM-8542 [050] ...2 82562.646232: user_mem_abort: logging_active 1, vcpu_id 50, f_ipa 0x8189ff000 , fault_status 0x4, prot 0x4, vma_pagesize 4096 , write_fault 0, exec_fault 0

*********************************************************************************
Recreating the same mappings, and small loops when dirty-logging is stopped.
*********************************************************************************
Recreating the same mappings:
CPU 3/KVM-26553 [003] ...2 29434.296315: user_mem_abort: logging_active 1, vcpu_id 3 , f_ipa 0x783627000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 3/KVM-26553 [003] ...2 29434.296319: user_mem_abort: logging_active 1, vcpu_id 3 , f_ipa 0x7e2a28000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 3/KVM-26553 [003] ...2 29434.296326: user_mem_abort: logging_active 1, vcpu_id 3 , f_ipa 0x783628000 , fault_status 0x4, prot 0x6, vma_pagesize 4096 , write_fault 1, exec_fault 0
CPU 3/KVM-26553 [003] ...2 29434.296332: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x7e2a29000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 3/KVM-26553 [003] ...2 29434.297387: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x783629000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 82/KVM-26634 [082] ...2 29434.297665: user_mem_abort: logging_active 0, vcpu_id 82, f_ipa 0x819a98000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 93/KVM-26645 [093] ...2 29434.300387: user_mem_abort: logging_active 0, vcpu_id 93, f_ipa 0x819ae6000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 93/KVM-26645 [093] ...2 29434.300391: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 92/KVM-26644 [092] ...2 29434.300392: user_mem_abort: logging_active 0, vcpu_id 92, f_ipa 0x819ae2000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 92/KVM-26644 [092] ...2 29434.300394: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 85/KVM-26637 [085] ...2 29434.300396: user_mem_abort: logging_active 0, vcpu_id 85, f_ipa 0x819aa4000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 85/KVM-26637 [085] ...2 29434.300400: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 84/KVM-26636 [084] ...2 29434.300402: user_mem_abort: logging_active 0, vcpu_id 84, f_ipa 0x819aa0000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 84/KVM-26636 [084] ...2 29434.300403: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 87/KVM-26639 [087] ...2 29434.300405: user_mem_abort: logging_active 0, vcpu_id 87, f_ipa 0x819aac000 , fault_status 0xc, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 87/KVM-26639 [087] ...2 29434.300407: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.300410: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.300412: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 88/KVM-26640 [088] ...2 29434.300417: user_mem_abort: logging_active 0, vcpu_id 88, f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 88/KVM-26640 [088] ...2 29434.300420: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 80/KVM-26632 [080] ...2 29434.300425: user_mem_abort: logging_active 0, vcpu_id 80, f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 80/KVM-26632 [080] ...2 29434.300427: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 32/KVM-26582 [032] ...2 29434.300431: user_mem_abort: logging_active 0, vcpu_id 32, f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 32/KVM-26582 [032] ...2 29434.300435: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 25/KVM-26575 [025] ...2 29434.300438: user_mem_abort: logging_active 0, vcpu_id 25, f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0

Small loops:
CPU 8/KVM-26558 [008] ...2 29434.303759: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303761: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303767: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303767: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303773: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303774: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303776: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303777: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303780: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83f952000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303780: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303786: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303786: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303793: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83f952000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303793: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303795: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303796: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303799: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303799: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303805: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303805: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303812: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83f952000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303812: stage2_map_walker_try_leaf_equal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303814: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303815: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303818: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83f952000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303818: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303823: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303824: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303828: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83f952000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303829: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303834: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303835: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd
CPU 8/KVM-26558 [008] ...2 29434.303838: user_mem_abort: logging_active 0, vcpu_id 8 , f_ipa 0x83fffe000 , fault_status 0x4, prot 0x6, vma_pagesize 1073741824, write_fault 1, exec_fault 0
CPU 8/KVM-26558 [008] ...2 29434.303839: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x2028400007fd , new_pte 0x402028400007fd
CPU 3/KVM-26553 [003] ...2 29434.303844: user_mem_abort: logging_active 0, vcpu_id 3 , f_ipa 0x80c321000 , fault_status 0x4, prot 0x7, vma_pagesize 1073741824, write_fault 0, exec_fault 1
CPU 3/KVM-26553 [003] ...2 29434.303845: stage2_map_walker_try_leaf_unequal: addr 0x800000000 , level 1, old_pte 0x402028400007fd, new_pte 0x2028400007fd


2020-12-12 09:56:02

by Yanan Wang

Subject: [RFC PATCH] KVM: arm64: Add prejudgement for relaxing permissions only case in stage2 translation fault handler

During dirty logging, after dirty logging has stopped, and even during
normal operation of a guest configured with huge mappings and many
vCPUs, translation faults by different vCPUs on the same GPA can occur
in quick succession. There are two reasons for this.

(1) If some vCPUs access the same GPA at the same time and the leaf
PTE is not yet set, they will all take translation faults. The first
vCPU to hold mmu_lock will set a valid leaf PTE, and the others will
later choose whether to update the leaf PTE.

(2) When a leaf entry or a table entry is changed with
break-before-make, if some vCPUs accessing the same GPA happen to hit
the window in which the target PTE is invalid, they will all take
translation faults and will later choose whether to update the leaf
PTE.

The worst case looks like this: several vCPUs take translation faults
on the same GPA with different prots and fight each other, repeatedly
flipping the access permissions of the PTE back and forth with
break-before-make. Each BBM-invalid window can trigger further
unnecessary translation faults. As a result, useless small loops
occur, which can leave a vCPU stuck.

To avoid unnecessary updates and small loops, add a prejudgement in
the translation fault handler: skip updating a valid leaf PTE if we
are trying to recreate exactly the same mapping or only to reduce
access permissions (such as RW-->RO), and update the valid leaf PTE
without break-before-make if we are only adding permissions.

Signed-off-by: Yanan Wang <[email protected]>
---
arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
1 file changed, 52 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 23a01dfcb27a..f8b3248cef1c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -45,6 +45,8 @@

#define KVM_PTE_LEAF_ATTR_HI_S2_XN BIT(54)

+#define KVM_PTE_LEAF_ATTR_PERMS (GENMASK(7, 6) | BIT(54))
+
struct kvm_pgtable_walk_data {
struct kvm_pgtable *pgt;
struct kvm_pgtable_walker *walker;
@@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
smp_store_release(ptep, pte);
}

-static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
- u32 level)
+static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
{
- kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
+ kvm_pte_t pte = kvm_phys_to_pte(pa);
u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
KVM_PTE_TYPE_BLOCK;

@@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
pte |= FIELD_PREP(KVM_PTE_TYPE, type);
pte |= KVM_PTE_VALID;

- /* Tolerate KVM recreating the exact same mapping. */
- if (kvm_pte_valid(old))
- return old == pte;
-
- smp_store_release(ptep, pte);
- return true;
+ return pte;
}

static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
@@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
kvm_pte_t *ptep, struct hyp_map_data *data)
{
+ kvm_pte_t new, old = *ptep;
u64 granule = kvm_granule_size(level), phys = data->phys;

if (!kvm_block_mapping_supported(addr, end, phys, level))
return false;

- WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
+ /* Tolerate KVM recreating the exact same mapping. */
+ new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+ if (old != new && !WARN_ON(kvm_pte_valid(old)))
+ smp_store_release(ptep, new);
+
data->phys += granule;
return true;
}
@@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
return 0;
}

+static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
+ kvm_pte_t *ptep, kvm_pte_t new,
+ struct stage2_map_data *data)
+{
+ kvm_pte_t old = *ptep, old_attr, new_attr;
+
+ if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
+ return false;
+
+ /*
+ * Skip updating if we are trying to recreate exactly the same mapping
+ * or to reduce the access permissions only. And update the valid leaf
+ * PTE without break-before-make if we are trying to add more access
+ * permissions only.
+ */
+ old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
+ new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
+ if (new_attr <= old_attr)
+ return true;
+
+ WRITE_ONCE(*ptep, new);
+ kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+
+ return true;
+}
+
static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
kvm_pte_t *ptep,
struct stage2_map_data *data)
{
+ kvm_pte_t new, old = *ptep;
u64 granule = kvm_granule_size(level), phys = data->phys;
+ struct page *page = virt_to_page(ptep);

if (!kvm_block_mapping_supported(addr, end, phys, level))
return false;

- if (kvm_pte_valid(*ptep))
- put_page(virt_to_page(ptep));
+ new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+ if (kvm_pte_valid(old)) {
+ if (stage2_set_valid_leaf_pte_pre(addr, level, ptep, new, data))
+ goto out;

- if (kvm_set_valid_leaf_pte(ptep, phys, data->attr, level))
- goto out;
+ /* Update the PTE with break-before-make if it's necessary. */
+ kvm_set_invalid_pte(ptep);
+ kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+ put_page(page);
+ }

- /* There's an existing valid leaf entry, so perform break-before-make */
- kvm_set_invalid_pte(ptep);
- kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
- kvm_set_valid_leaf_pte(ptep, phys, data->attr, level);
+ smp_store_release(ptep, new);
+ get_page(page);
out:
data->phys += granule;
return true;
@@ -521,7 +553,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
}

if (stage2_map_walker_try_leaf(addr, end, level, ptep, data))
- goto out_get_page;
+ return 0;

if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
return -EINVAL;
@@ -545,9 +577,8 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
}

kvm_set_table_pte(ptep, childp);
-
-out_get_page:
get_page(page);
+
return 0;
}

--
2.19.1

2020-12-12 10:50:30

by Marc Zyngier

Subject: Re: [RFC PATCH] KVM: arm64: Add prejudgement for relaxing permissions only case in stage2 translation fault handler

Hi Yanan,

On 2020-12-11 08:01, Yanan Wang wrote:
> During dirty logging, after dirty logging has stopped, and even during
> normal operation of a guest configured with huge mappings and many
> vCPUs, translation faults by different vCPUs on the same GPA can occur
> in quick succession. There are two reasons for this.
>
> (1) If some vCPUs access the same GPA at the same time and the leaf
> PTE is not yet set, they will all take translation faults. The first
> vCPU to hold mmu_lock will set a valid leaf PTE, and the others will
> later choose whether to update the leaf PTE.
>
> (2) When a leaf entry or a table entry is changed with
> break-before-make, if some vCPUs accessing the same GPA happen to hit
> the window in which the target PTE is invalid, they will all take
> translation faults and will later choose whether to update the leaf
> PTE.
>
> The worst case looks like this: several vCPUs take translation faults
> on the same GPA with different prots and fight each other, repeatedly
> flipping the access permissions of the PTE back and forth with
> break-before-make. Each BBM-invalid window can trigger further
> unnecessary translation faults. As a result, useless small loops
> occur, which can leave a vCPU stuck.
>
> To avoid unnecessary updates and small loops, add a prejudgement in
> the translation fault handler: skip updating a valid leaf PTE if we
> are trying to recreate exactly the same mapping or only to reduce
> access permissions (such as RW-->RO), and update the valid leaf PTE
> without break-before-make if we are only adding permissions.

I'm a bit perplexed by this: why are you skipping the update if the
permissions need to be reduced? Even more, how can we reduce the
permissions from a vCPU fault? I can't really think of a scenario where
that happens.

Or are you describing a case where two vcpus fault simultaneously with
conflicting permissions:

- Both vcpus take a translation fault
- vcpu A wants W access
- vcpu B wants R access

and 'A' gets in first, sets the permissions to RW (because R is
implicitly added to W), followed by 'B', which downgrades it to RO?

If that's what you are describing, then I agree we could do better.
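
That two-vCPU scenario can be made concrete with a toy model. This is
hypothetical simulation code, not KVM: it only tracks a permission mask
and counts PTE rewrites, with a write fault implying read access, as the
fault path does:

```c
#include <stdint.h>

#define PROT_R	(1u << 0)
#define PROT_W	(1u << 1)

struct toy_pte {
	unsigned int perms;	/* currently mapped permissions */
	int rewrites;		/* break-before-make style PTE writes */
};

/* A write fault installs RW (R is implicitly added to W), a read fault RO. */
static unsigned int fault_perms(unsigned int prot)
{
	return (prot & PROT_W) ? (PROT_R | PROT_W) : PROT_R;
}

/* Old behaviour: every conflicting fault rewrites the PTE with its own perms. */
static void fault_rewrite(struct toy_pte *p, unsigned int prot)
{
	unsigned int new = fault_perms(prot);

	if (p->perms != new) {
		p->perms = new;	/* vCPUs with different prots ping-pong here */
		p->rewrites++;
	}
}

/* Prejudged behaviour: skip when no permission is added, else relax in place. */
static void fault_prejudge(struct toy_pte *p, unsigned int prot)
{
	unsigned int new = fault_perms(prot);

	if ((new & ~p->perms) == 0)
		return;		/* same mapping or fewer perms: nothing to do */
	p->perms |= new;
	p->rewrites++;
}
```

Feeding both models the alternating fault sequence W, R, W, R (vcpu A then
vcpu B) gives four rewrites with the old behaviour and a single one with
the prejudgement, since B's RO fault no longer downgrades A's RW mapping.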

>
> Signed-off-by: Yanan Wang <[email protected]>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
> 1 file changed, 52 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c
> b/arch/arm64/kvm/hyp/pgtable.c
> index 23a01dfcb27a..f8b3248cef1c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -45,6 +45,8 @@
>
> #define KVM_PTE_LEAF_ATTR_HI_S2_XN BIT(54)
>
> +#define KVM_PTE_LEAF_ATTR_PERMS (GENMASK(7, 6) | BIT(54))
> +
> struct kvm_pgtable_walk_data {
> struct kvm_pgtable *pgt;
> struct kvm_pgtable_walker *walker;
> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep,
> kvm_pte_t *childp)
> smp_store_release(ptep, pte);
> }
>
> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t
> attr,
> - u32 level)
> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32
> level)
> {
> - kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
> + kvm_pte_t pte = kvm_phys_to_pte(pa);
> u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE
> :
> KVM_PTE_TYPE_BLOCK;
>
> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t
> *ptep, u64 pa, kvm_pte_t attr,
> pte |= FIELD_PREP(KVM_PTE_TYPE, type);
> pte |= KVM_PTE_VALID;
>
> - /* Tolerate KVM recreating the exact same mapping. */
> - if (kvm_pte_valid(old))
> - return old == pte;
> -
> - smp_store_release(ptep, pte);
> - return true;
> + return pte;
> }
>
> static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data,
> u64 addr,
> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum
> kvm_pgtable_prot prot,
> static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
> kvm_pte_t *ptep, struct hyp_map_data *data)
> {
> + kvm_pte_t new, old = *ptep;
> u64 granule = kvm_granule_size(level), phys = data->phys;
>
> if (!kvm_block_mapping_supported(addr, end, phys, level))
> return false;
>
> - WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
> + /* Tolerate KVM recreating the exact same mapping. */
> + new = kvm_init_valid_leaf_pte(phys, data->attr, level);
> + if (old != new && !WARN_ON(kvm_pte_valid(old)))
> + smp_store_release(ptep, new);
> +
> data->phys += granule;
> return true;
> }
> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum
> kvm_pgtable_prot prot,
> return 0;
> }
>
> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
> + kvm_pte_t *ptep, kvm_pte_t new,
> + struct stage2_map_data *data)
> +{
> + kvm_pte_t old = *ptep, old_attr, new_attr;
> +
> + if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
> + return false;
> +
> + /*
> + * Skip updating if we are trying to recreate exactly the same
> mapping
> + * or to reduce the access permissions only. And update the valid
> leaf
> + * PTE without break-before-make if we are trying to add more access
> + * permissions only.
> + */
> + old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^
> KVM_PTE_LEAF_ATTR_HI_S2_XN;
> + new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^
> KVM_PTE_LEAF_ATTR_HI_S2_XN;
> + if (new_attr <= old_attr)
> + return true;
> +
> + WRITE_ONCE(*ptep, new);
> + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);

I think what bothers me the most here is that we are turning a mapping
into a permission update, which makes the code really hard to read, and
mixes two things that were so far separate.

I wonder whether we should instead abort the update and simply take the
fault again, if we ever need to do it.

Thanks,

M.
--
Jazz is not dead. It just smells funny...

2020-12-14 13:56:14

by Yanan Wang

Subject: Re: [RFC PATCH] KVM: arm64: Add prejudgement for relaxing permissions only case in stage2 translation fault handler


On 2020/12/11 17:49, Marc Zyngier wrote:
> Hi Yanan,
>
> On 2020-12-11 08:01, Yanan Wang wrote:
>> In dirty-logging, or dirty-logging-stopped time, or even normal running
>> time of a guest configured with huge mappings and numbers of vCPUs,
>> translation faults by different vCPUs on the same GPA could occur
>> successively almost at the same time. There are two reasons for it.
>>
>> (1) If there are some vCPUs accessing the same GPA at the same time
>> and the leaf PTE is not set yet, then they will all cause translation
>> faults and the first vCPU holding mmu_lock will set valid leaf PTE,
>> and the others will later choose to update the leaf PTE or not.
>>
>> (2) When changing a leaf entry or a table entry with break-before-make,
>> if there are some vCPUs accessing the same GPA just catch the moment
>> when the target PTE is set invalid in a BBM procedure coincidentally,
>> they will all cause translation faults and will later choose to update
>> the leaf PTE or not.
>>
>> The worst case can be like this: some vCPUs cause translation faults
>> on the same GPA with different prots, they will fight each other by
>> changing back access permissions of the PTE with break-before-make.
>> And the BBM-invalid moment might trigger more unnecessary translation
>> faults. As a result, some useless small loops will occur, which could
>> lead to vCPU stuck.
>>
>> To avoid unnecessary update and small loops, add prejudgement in the
>> translation fault handler: Skip updating the valid leaf PTE if we are
>> trying to recreate exactly the same mapping or to reduce access
>> permissions only (such as RW-->RO). And update the valid leaf PTE without
>> break-before-make if we are trying to add more permissions only.
>
> I'm a bit perplexed with this: why are you skipping the update if the
> permissions need to be reduced? Even more, how can we reduce the
> permissions from a vCPU fault? I can't really think of a scenario where
> that happens.
>
> Or are you describing a case where two vcpus fault simultaneously with
> conflicting permissions:
>
> - Both vcpus fault on translation fault
> - vcpu A wants W access
> - vpcu B wants R access
>
> and 'A' gets in first, sets the permissions to RW (because R is
> implicitly added to W), followed by 'B' which downgrades it to RO?
>
> If that's what you are describing, then I agree we could do better.
Yes, this is exactly what I want to describe.
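For illustration, that RW->RO downgrade can be reproduced outside the kernel with a toy handler (the bit values follow the arm64 S2AP layout, but `pte_for_fault` and `naive_handler` are made-up stand-ins, not KVM code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stage-2 permission bits (same positions as arm64 S2AP). */
#define S2AP_R    (1ULL << 6)
#define S2AP_W    (1ULL << 7)
#define PTE_VALID (1ULL << 0)

/* A write fault wants an RW mapping (R is implied by W); a read fault
 * only wants RO. */
static uint64_t pte_for_fault(uint64_t pa, int write_fault)
{
    return pa | PTE_VALID | S2AP_R | (write_fault ? S2AP_W : 0);
}

/* A handler with no prejudgement: the last faulting vCPU always wins,
 * replacing the PTE (after break-before-make in the real code). */
static void naive_handler(uint64_t *ptep, uint64_t new_pte)
{
    *ptep = new_pte;
}
```

If vCPU A's write fault is handled first and vCPU B's read fault second, B downgrades the fresh RW mapping to RO, so A's very next write faults again; with several vCPUs this is exactly the small loop described in the cover letter.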
>
>>
>> Signed-off-by: Yanan Wang <[email protected]>
>> ---
>>  arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
>>  1 file changed, 52 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index 23a01dfcb27a..f8b3248cef1c 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -45,6 +45,8 @@
>>
>>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN    BIT(54)
>>
>> +#define KVM_PTE_LEAF_ATTR_PERMS    (GENMASK(7, 6) | BIT(54))
>> +
>>  struct kvm_pgtable_walk_data {
>>      struct kvm_pgtable        *pgt;
>>      struct kvm_pgtable_walker    *walker;
>> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
>>      smp_store_release(ptep, pte);
>>  }
>>
>> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>> -                   u32 level)
>> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>>  {
>> -    kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
>> +    kvm_pte_t pte = kvm_phys_to_pte(pa);
>>      u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
>>                                                         KVM_PTE_TYPE_BLOCK;
>>
>> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>>      pte |= FIELD_PREP(KVM_PTE_TYPE, type);
>>      pte |= KVM_PTE_VALID;
>>
>> -    /* Tolerate KVM recreating the exact same mapping. */
>> -    if (kvm_pte_valid(old))
>> -        return old == pte;
>> -
>> -    smp_store_release(ptep, pte);
>> -    return true;
>> +    return pte;
>>  }
>>
>>  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
>> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>  static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>>                      kvm_pte_t *ptep, struct hyp_map_data *data)
>>  {
>> +    kvm_pte_t new, old = *ptep;
>>      u64 granule = kvm_granule_size(level), phys = data->phys;
>>
>>      if (!kvm_block_mapping_supported(addr, end, phys, level))
>>          return false;
>>
>> -    WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
>> +    /* Tolerate KVM recreating the exact same mapping. */
>> +    new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>> +    if (old != new && !WARN_ON(kvm_pte_valid(old)))
>> +        smp_store_release(ptep, new);
>> +
>>      data->phys += granule;
>>      return true;
>>  }
>> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>      return 0;
>>  }
>>
>> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
>> +                      kvm_pte_t *ptep, kvm_pte_t new,
>> +                      struct stage2_map_data *data)
>> +{
>> +    kvm_pte_t old = *ptep, old_attr, new_attr;
>> +
>> +    if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
>> +        return false;
>> +
>> +    /*
>> +     * Skip updating if we are trying to recreate exactly the same mapping
>> +     * or to reduce the access permissions only. And update the valid leaf
>> +     * PTE without break-before-make if we are trying to add more access
>> +     * permissions only.
>> +     */
>> +    old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +    new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +    if (new_attr <= old_attr)
>> +        return true;
>> +
>> +    WRITE_ONCE(*ptep, new);
>> +    kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
>
> I think what bothers me the most here is that we are turning a mapping into
> a permission update, which makes the code really hard to read, and mixes
> two things that were so far separate.
>
> I wonder whether we should instead abort the update and simply take the
> fault again, if we ever need to do it.
>
> Thanks,
>
>         M.
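As a side note on the `^ KVM_PTE_LEAF_ATTR_HI_S2_XN` step in the hunk above: XN is an inhibit bit (set means non-executable), so flipping it makes every bit in the permission mask mean "granted", and a plain numeric comparison then orders the attributes by permission. A standalone sketch with simplified bit names (not the kernel headers):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's attribute bits (same positions
 * as the arm64 stage-2 descriptor, but not the real headers). */
#define S2AP_R (1ULL << 6)   /* set => readable */
#define S2AP_W (1ULL << 7)   /* set => writable */
#define S2_XN  (1ULL << 54)  /* set => NOT executable (inhibit bit) */
#define PERMS  (S2AP_R | S2AP_W | S2_XN)

/*
 * Flip XN so that, for every bit in PERMS, "set" means "permission
 * granted". A numeric comparison then tells whether the new mapping
 * grants anything beyond the old one: if not (identical mapping, or a
 * downgrade such as RW->RO), the translation fault handler can skip
 * the update entirely.
 */
static int can_skip_update(uint64_t old_pte, uint64_t new_pte)
{
    uint64_t old_attr = (old_pte & PERMS) ^ S2_XN;
    uint64_t new_attr = (new_pte & PERMS) ^ S2_XN;

    return new_attr <= old_attr;
}
```

Note the comparison treats W as more significant than R, which matches the stage-2 mappings KVM creates: a writable mapping is also readable ("R is implicitly added to W", as quoted above).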

2020-12-15 13:22:38

by Marc Zyngier

[permalink] [raw]
Subject: Re: [RFC PATCH] KVM: arm64: Add prejudgement for relaxing permissions only case in stage2 translation fault handler

Hi Yanan,

On 2020-12-14 07:20, wangyanan (Y) wrote:

> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index a74a62283012..e3c6133567c4 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -45,6 +45,10 @@
>
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN     BIT(54)
>
> +#define KVM_PTE_LEAF_ATTR_S2_PERMS (KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
> +                                    KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
> +                                    KVM_PTE_LEAF_ATTR_HI_S2_XN)
> +
>  struct kvm_pgtable_walk_data {
>         struct kvm_pgtable                   *pgt;
>         struct kvm_pgtable_walker       *walker;
> @@ -473,8 +477,13 @@ static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>
>         new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>         if (kvm_pte_valid(old)) {
> -               /* Tolerate KVM recreating the exact same mapping. */
> -               if (old == new)
> +               /*
> +                * Skip updating the PTE with break-before-make if we are trying
> +                * to recreate the exact same mapping or only change the access
> +                * permissions. Actually, change of permissions will be handled
> +                * through the relax_perms path next time if necessary.
> +                */
> +               if (!((old ^ new) & (~KVM_PTE_LEAF_ATTR_S2_PERMS)))
>                         goto out;
>
>                 /* There's an existing different valid leaf entry, so perform
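Read in isolation, the `!((old ^ new) & (~KVM_PTE_LEAF_ATTR_S2_PERMS))` test above says: the two PTEs are identical once the three permission bits are masked out, i.e. same output address, type and memory attributes. A small standalone check of that property (illustrative bit values, not the kernel headers):

```c
#include <assert.h>
#include <stdint.h>

#define S2AP_R (1ULL << 6)
#define S2AP_W (1ULL << 7)
#define S2_XN  (1ULL << 54)
#define PERMS  (S2AP_R | S2AP_W | S2_XN)

/* True iff old and new agree on everything but the permission bits.
 * Such a change never needs break-before-make: the mapping itself is
 * unchanged, and a permission relaxation can go through the
 * relax_perms path instead. */
static int only_perms_differ(uint64_t old_pte, uint64_t new_pte)
{
    return !((old_pte ^ new_pte) & ~PERMS);
}
```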

I think there is a bit more work to do on this.

One obvious issue is that we currently flag a page as dirty before handling
the fault. With an early exit, we end up having spurious dirty pages.

It's not a big deal, but I'd rather mark the page dirty after the mapping
or the permission update has been successful (at the moment, it cannot
fail).
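The ordering point can be made concrete with a toy handler (`mark_page_dirty` and the flags below are hypothetical stand-ins, not the KVM functions): if the page is flagged dirty up front, any early exit, such as the skip discussed in this thread, leaves a spurious dirty page behind.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static int dirty_count;  /* stands in for the dirty bitmap */

static void mark_page_dirty(uint64_t gfn)
{
    (void)gfn;
    dirty_count++;
}

/* Toy write-fault handler: 'skip_update' models the early exit taken
 * when the prejudgement decides no PTE change is needed. */
static void handle_write_fault(uint64_t gfn, bool skip_update, bool dirty_first)
{
    if (dirty_first)
        mark_page_dirty(gfn);       /* current ordering */

    if (skip_update)
        return;                     /* early exit: nothing was written */

    /* ... install or relax the PTE here ... */

    if (!dirty_first)
        mark_page_dirty(gfn);       /* mark only after a successful update */
}
```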

Thanks,

M.
--
Jazz is not dead. It just smells funny...