Date: Thu, 2 May 2024 05:59:50 +0200
From: Guillaume Morin
To: David Hildenbrand
Cc: Guillaume Morin, oleg@redhat.com, linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, muchun.song@linux.dev
Subject: Re: [RFC][PATCH] uprobe: support for private hugetlb mappings
References: <8d5314ac-5afe-41d4-9d27-9512cd96d21c@redhat.com>
	<385d3516-95bb-4ff9-9d60-ac4e46104130@redhat.com>
	<8a7b9e65-b073-4132-9680-efc2b3af6af0@redhat.com>
In-Reply-To: <8a7b9e65-b073-4132-9680-efc2b3af6af0@redhat.com>

On 30 Apr 21:25, David Hildenbrand wrote:
> > I tried to get the hugepd stuff right but this was the first I heard
> > about it :-) Afaict follow_huge_pmd and friends were already DTRT
>
> I'll have to have a closer look at some details (the hugepd writability
> check looks a bit odd), but it's mostly what I would have expected!

Ok, in the meantime here is the uprobe change on top of your current
uprobes_cow branch, trying to address the comments you made in your
previous message. Some of them were not 100% clear to me, so it's a
best-effort patch :-) Again, only lightly tested.
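A side note before the patch itself: the hugetlb entry point below keeps
deriving two values from the probed address, the huge-page-aligned base
(vaddr & page_mask) and the index of the base-page-sized subpage that
actually holds the opcode. In isolation the arithmetic looks like this
(userspace illustration only, not part of the patch; it assumes 2 MiB
huge pages and a made-up address):

#include <stdio.h>

#define HPAGE_SIZE	(2UL << 20)	/* assumed 2 MiB huge page */
#define BASE_PAGE_SHIFT	12		/* 4 KiB base pages */

int main(void)
{
	unsigned long vaddr = 0x7f0000123456UL;		/* made-up probe address */
	unsigned long page_mask = ~(HPAGE_SIZE - 1);	/* huge_page_mask() */
	unsigned long base = vaddr & page_mask;		/* start of the huge page */
	unsigned long subpage = (vaddr & (HPAGE_SIZE - 1)) >> BASE_PAGE_SHIFT;

	/* The patch reaches this subpage via nth_page(page, subpage). */
	printf("huge page base %#lx, subpage index %lu\n", base, subpage);
	return 0;
}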
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -83,6 +83,10 @@ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
 	{}
 };
 
+bool hugetlbfs_mapping(struct address_space *mapping) {
+	return mapping->a_ops == &hugetlbfs_aops;
+}
+
 /*
  * Mask used when checking the page offset value passed in via system
  * calls. This value will be converted to a loff_t which is signed.
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -511,6 +511,8 @@ struct hugetlbfs_sb_info {
 	umode_t mode;
 };
 
+bool hugetlbfs_mapping(struct address_space *mapping);
+
 static inline struct hugetlbfs_sb_info *HUGETLBFS_SB(struct super_block *sb)
 {
 	return sb->s_fs_info;
@@ -557,6 +559,8 @@ static inline struct hstate *hstate_inode(struct inode *i)
 {
 	return NULL;
 }
+
+static inline bool hugetlbfs_mapping(struct address_space *mapping) { return false; }
 #endif /* !CONFIG_HUGETLBFS */
 
 #ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -11,6 +11,7 @@
 #include <linux/kernel.h>
 #include <linux/highmem.h>
+#include <linux/hugetlb.h>
 #include <linux/pagemap.h>	/* read_mapping_page */
 #include <linux/slab.h>
 #include <linux/sched.h>
@@ -120,7 +121,7 @@ struct xol_area {
  */
 static bool valid_vma(struct vm_area_struct *vma, bool is_register)
 {
-	vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
+	vm_flags_t flags = VM_MAYEXEC | VM_MAYSHARE;
 
 	if (is_register)
 		flags |= VM_WRITE;
@@ -177,6 +178,19 @@ static void copy_to_page(struct page *page, unsigned long vaddr, const void *src
 	kunmap_atomic(kaddr);
 }
 
+static bool compare_pages(struct page *page1, struct page *page2, unsigned long page_size)
+{
+	char *addr1, *addr2;
+	int ret;
+
+	addr1 = kmap_local_page(page1);
+	addr2 = kmap_local_page(page2);
+	ret = memcmp(addr1, addr2, page_size);
+	kunmap_local(addr2);
+	kunmap_local(addr1);
+	return ret == 0;
+}
+
 static int verify_opcode(struct page *page, unsigned long vaddr, uprobe_opcode_t *new_opcode)
 {
 	uprobe_opcode_t old_opcode;
@@ -366,7 +380,9 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
 }
 
 static bool orig_page_is_identical(struct vm_area_struct *vma,
-		unsigned long vaddr, struct page *page, bool *large)
+		unsigned long vaddr, struct page *page,
+		unsigned long page_size,
+		bool *large)
 {
 	const pgoff_t index = vaddr_to_offset(vma, vaddr) >> PAGE_SHIFT;
 	struct page *orig_page = find_get_page(vma->vm_file->f_inode->i_mapping,
@@ -380,7 +396,7 @@ static bool orig_page_is_identical(struct vm_area_struct *vma,
 	*large = folio_test_large(orig_folio);
 
 	identical = folio_test_uptodate(orig_folio) &&
-		    pages_identical(page, orig_page);
+		    compare_pages(page, orig_page, page_size);
 	folio_put(orig_folio);
 	return identical;
 }
@@ -396,6 +412,81 @@ struct uwo_data {
 	uprobe_opcode_t opcode;
 };
 
+static int __write_opcode_hugetlb(pte_t *ptep, unsigned long page_mask,
+		unsigned long vaddr,
+		unsigned long next, struct mm_walk *walk)
+{
+	struct uwo_data *data = walk->private;
+	const bool is_register = !!is_swbp_insn(&data->opcode);
+	pte_t pte = huge_ptep_get(ptep);
+	struct folio *folio;
+	struct page *page;
+	bool large;
+	struct hstate *h = hstate_vma(walk->vma);
+	unsigned subpage_index = (vaddr & (huge_page_size(h) - 1)) >>
+		PAGE_SHIFT;
+
+	if (!pte_present(pte))
+		return UWO_RETRY;
+	page = vm_normal_page(walk->vma, vaddr, pte);
+	if (!page)
+		return UWO_RETRY;
+	folio = page_folio(page);
+
+	/* When unregistering and there is no anon folio anymore, we're done. */
+	if (!folio_test_anon(folio))
+		return is_register ? UWO_RETRY_WRITE_FAULT : UWO_DONE;
+
+	/*
+	 * See can_follow_write_pte(): we'd actually prefer requiring a
+	 * writable PTE here, but when unregistering we might no longer have
+	 * VM_WRITE ...
+	 */
+	if (!huge_pte_write(pte)) {
+		if (!PageAnonExclusive(page))
+			return UWO_RETRY_WRITE_FAULT;
+		if (unlikely(userfaultfd_wp(walk->vma) && huge_pte_uffd_wp(pte)))
+			return UWO_RETRY_WRITE_FAULT;
+		/* SOFTDIRTY is handled via pte_mkdirty() below. */
+	}
+
+	/* Unmap + flush the TLB, such that we can write atomically. */
+	flush_cache_page(walk->vma, vaddr & page_mask, pte_pfn(pte));
+	pte = huge_ptep_clear_flush(walk->vma, vaddr & page_mask, ptep);
+	copy_to_page(nth_page(page, subpage_index), data->opcode_vaddr,
+		     &data->opcode, UPROBE_SWBP_INSN_SIZE);
+
+	/*
+	 * When unregistering, we may only zap a PTE if uffd is disabled and
+	 * the folio is not pinned ...
+	 */
+	if (is_register || userfaultfd_missing(walk->vma) ||
+	    folio_maybe_dma_pinned(folio))
+		goto remap;
+
+	/*
+	 * ... the mapped anon page is identical to the original page (that
+	 * will get faulted in on next access), and we don't have GUP pins.
+	 */
+	if (!orig_page_is_identical(walk->vma, vaddr & page_mask, page,
+				    huge_page_size(h), &large))
+		goto remap;
+
+	hugetlb_remove_rmap(folio);
+	folio_put(folio);
+	return UWO_DONE;
+remap:
+	/*
+	 * Make sure that our copy_to_page() changes become visible before the
+	 * set_huge_pte_at() write.
+	 */
+	smp_wmb();
+	/* We modified the page. Make sure to mark the PTE dirty. */
+	set_huge_pte_at(walk->mm, vaddr & page_mask, ptep,
+			huge_pte_mkdirty(pte), huge_page_size(h));
+	return UWO_DONE;
+}
+
 static int __write_opcode_pte(pte_t *ptep, unsigned long vaddr,
 		unsigned long next, struct mm_walk *walk)
 {
@@ -447,7 +538,7 @@ static int __write_opcode_pte(pte_t *ptep, unsigned long vaddr,
 	 * ... the mapped anon page is identical to the original page (that
 	 * will get faulted in on next access), and we don't have GUP pins.
 	 */
-	if (!orig_page_is_identical(walk->vma, vaddr, page, &large))
+	if (!orig_page_is_identical(walk->vma, vaddr, page, PAGE_SIZE, &large))
 		goto remap;
 
 	/* Zap it and try to reclaim swap space. */
@@ -473,6 +564,7 @@ static int __write_opcode_pte(pte_t *ptep, unsigned long vaddr,
 }
 
 static const struct mm_walk_ops write_opcode_ops = {
+	.hugetlb_entry = __write_opcode_hugetlb,
 	.pte_entry = __write_opcode_pte,
 	.walk_lock = PGWALK_WRLOCK,
 };
@@ -510,6 +602,8 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	int ret, ref_ctr_updated = 0;
 	struct page *page;
+	unsigned long page_size = PAGE_SIZE;
+	unsigned long page_mask = PAGE_MASK;
 
 	if (WARN_ON_ONCE(!is_cow_mapping(vma->vm_flags)))
 		return -EINVAL;
@@ -528,6 +622,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	if (ret != 1)
 		goto out;
 
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+		page_size = huge_page_size(h);
+		page_mask = huge_page_mask(h);
+	}
 	ret = verify_opcode(page, opcode_vaddr, &opcode);
 	put_page(page);
 	if (ret <= 0)
@@ -547,8 +646,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 		 * unregistering. So trigger MMU notifiers now, as we won't
 		 * be able to do it under PTL.
 		 */
+		const unsigned long start = vaddr & page_mask;
 		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
-					vaddr, vaddr + PAGE_SIZE);
+					start, start + page_size);
 		mmu_notifier_invalidate_range_start(&range);
 	}
@@ -830,8 +930,16 @@ static int __copy_insn(struct address_space *mapping, struct file *filp,
 	 */
 	if (mapping->a_ops->read_folio)
 		page = read_mapping_page(mapping, offset >> PAGE_SHIFT, filp);
-	else
+	else if (!is_file_hugepages(filp))
 		page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
+	else {
+		struct hstate *h = hstate_file(filp);
+		unsigned long mask = huge_page_mask(h);
+		page = find_get_page(mapping, (offset & mask) >> PAGE_SHIFT);
+		if (IS_ERR(page))
+			return PTR_ERR(page);
+		page = nth_page(page, (offset & (huge_page_size(h) - 1)) >> PAGE_SHIFT);
+	}
 	if (IS_ERR(page))
 		return PTR_ERR(page);
@@ -1182,9 +1290,12 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
 	if (!uc->handler && !uc->ret_handler)
 		return -EINVAL;
 
-	/* copy_insn() uses read_mapping_page() or shmem_read_mapping_page() */
+	/* copy_insn() uses read_mapping_page() or shmem/hugetlbfs specific
+	 * logic
+	 */
 	if (!inode->i_mapping->a_ops->read_folio &&
-	    !shmem_mapping(inode->i_mapping))
+	    !shmem_mapping(inode->i_mapping) &&
+	    !hugetlbfs_mapping(inode->i_mapping))
 		return -EIO;
 	/* Racy, just to catch the obvious mistakes */
 	if (offset > i_size_read(inode))
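For anyone wondering how a VMA that passes the relaxed valid_vma()
check arises in practice: roughly via something like the sketch below,
which stages a copy of some executable text in a hugetlbfs file and
maps it back as a private, executable hugetlb mapping. Untested
illustration only -- the hugetlbfs path is made up, error handling is
elided, and len must be a multiple of the huge page size.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

static void *map_text_hugetlb(const void *text, size_t len)
{
	int fd = open("/dev/hugepages/text-copy", O_CREAT | O_RDWR, 0600);
	void *staging, *code;

	if (fd < 0)
		return NULL;
	/* hugetlbfs has no write(2), so populate via a shared mapping. */
	staging = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (staging == MAP_FAILED)
		return NULL;
	memcpy(staging, text, len);
	munmap(staging, len);

	/* MAP_PRIVATE + PROT_EXEC: a write (such as the one uprobes does)
	 * CoWs into anon hugetlb folios, the case the patch handles. */
	code = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
	close(fd);
	return code == MAP_FAILED ? NULL : code;
}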
-- 
Guillaume Morin