Message-ID: <9f76ce23-67b1-ccbe-a722-1db0e8f0a408@linux.dev>
Date: Mon, 19 Feb 2024 10:29:40 +0800
Subject: Re: [PATCH] mm/mmap: convert all mas except mas_detach to vma iterator
From: Yajun Deng
To: akpm@linux-foundation.org
Cc: Liam.Howlett@Oracle.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, vbabka@suse.cz, lstoakes@gmail.com, surenb@google.com
In-Reply-To: <20240218023155.2684469-1-yajun.deng@linux.dev>
References: <20240218023155.2684469-1-yajun.deng@linux.dev>

Cc: Vlastimil, Lorenzo, Suren

On 2024/2/18 10:31, Yajun Deng wrote:
> There are two types of iterators, mas and vmi, in the current code. If the
> maple tree comes from the mm struct, we can use the vma iterator. Avoid
> using mas directly.
>
> Leave the mt_detach tree using mas, as it doesn't come from the mm
> struct.
>
> Convert all mas except mas_detach to the vma iterator, and introduce
> vma_iter_area_{lowest, highest} helper functions for use with the vma
> iterator.
>
> Signed-off-by: Yajun Deng
> ---
>  mm/internal.h |  12 ++++++
>  mm/mmap.c     | 114 +++++++++++++++++++++++++-------------------------
>  2 files changed, 69 insertions(+), 57 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 1e29c5821a1d..6117e63a7acc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1147,6 +1147,18 @@ static inline void vma_iter_config(struct vma_iterator *vmi,
>  	__mas_set_range(&vmi->mas, index, last - 1);
>  }
>
> +static inline int vma_iter_area_lowest(struct vma_iterator *vmi, unsigned long min,
> +		unsigned long max, unsigned long size)
> +{
> +	return mas_empty_area(&vmi->mas, min, max - 1, size);
> +}
> +
> +static inline int vma_iter_area_highest(struct vma_iterator *vmi, unsigned long min,
> +		unsigned long max, unsigned long size)
> +{
> +	return mas_empty_area_rev(&vmi->mas, min, max - 1, size);
> +}
> +
>  /*
>   * VMA Iterator functions shared between nommu and mmap
>   */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 7a9d2895a1bd..2fc38bf0d1aa 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1104,21 +1104,21 @@ static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old, struct vm_
>   */
>  struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
>  {
> -	MA_STATE(mas, &vma->vm_mm->mm_mt, vma->vm_end, vma->vm_end);
>  	struct anon_vma *anon_vma = NULL;
>  	struct vm_area_struct *prev, *next;
> +	VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_end);
>
>  	/* Try next first. */
> -	next = mas_walk(&mas);
> +	next = vma_iter_load(&vmi);
>  	if (next) {
>  		anon_vma = reusable_anon_vma(next, vma, next);
>  		if (anon_vma)
>  			return anon_vma;
>  	}
>
> -	prev = mas_prev(&mas, 0);
> +	prev = vma_prev(&vmi);
>  	VM_BUG_ON_VMA(prev != vma, vma);
> -	prev = mas_prev(&mas, 0);
> +	prev = vma_prev(&vmi);
>  	/* Try prev next. */
>  	if (prev)
>  		anon_vma = reusable_anon_vma(prev, prev, vma);
> @@ -1566,8 +1566,7 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  	unsigned long length, gap;
>  	unsigned long low_limit, high_limit;
>  	struct vm_area_struct *tmp;
> -
> -	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
> +	VMA_ITERATOR(vmi, current->mm, 0);
>
>  	/* Adjust search length to account for worst case alignment overhead */
>  	length = info->length + info->align_mask;
> @@ -1579,23 +1578,23 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  	low_limit = mmap_min_addr;
>  	high_limit = info->high_limit;
>  retry:
> -	if (mas_empty_area(&mas, low_limit, high_limit - 1, length))
> +	if (vma_iter_area_lowest(&vmi, low_limit, high_limit, length))
>  		return -ENOMEM;
>
> -	gap = mas.index;
> +	gap = vma_iter_addr(&vmi);
>  	gap += (info->align_offset - gap) & info->align_mask;
> -	tmp = mas_next(&mas, ULONG_MAX);
> +	tmp = vma_next(&vmi);
>  	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
>  		if (vm_start_gap(tmp) < gap + length - 1) {
>  			low_limit = tmp->vm_end;
> -			mas_reset(&mas);
> +			mas_reset(&vmi.mas);
>  			goto retry;
>  		}
>  	} else {
> -		tmp = mas_prev(&mas, 0);
> +		tmp = vma_prev(&vmi);
>  		if (tmp && vm_end_gap(tmp) > gap) {
>  			low_limit = vm_end_gap(tmp);
> -			mas_reset(&mas);
> +			mas_reset(&vmi.mas);
>  			goto retry;
>  		}
>  	}
> @@ -1618,8 +1617,8 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
>  	unsigned long length, gap, gap_end;
>  	unsigned long low_limit, high_limit;
>  	struct vm_area_struct *tmp;
> +	VMA_ITERATOR(vmi, current->mm, 0);
>
> -	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
>  	/* Adjust search length to account for worst case alignment overhead */
>  	length = info->length + info->align_mask;
>  	if (length < info->length)
> @@ -1630,24 +1629,24 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
>  	low_limit = mmap_min_addr;
>  	high_limit = info->high_limit;
>  retry:
> -	if (mas_empty_area_rev(&mas, low_limit, high_limit - 1, length))
> +	if (vma_iter_area_highest(&vmi, low_limit, high_limit, length))
>  		return -ENOMEM;
>
> -	gap = mas.last + 1 - info->length;
> +	gap = vma_iter_end(&vmi) - info->length;
>  	gap -= (gap - info->align_offset) & info->align_mask;
> -	gap_end = mas.last;
> -	tmp = mas_next(&mas, ULONG_MAX);
> +	gap_end = vma_iter_end(&vmi);
> +	tmp = vma_next(&vmi);
>  	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
> -		if (vm_start_gap(tmp) <= gap_end) {
> +		if (vm_start_gap(tmp) < gap_end) {
>  			high_limit = vm_start_gap(tmp);
> -			mas_reset(&mas);
> +			mas_reset(&vmi.mas);
>  			goto retry;
>  		}
>  	} else {
> -		tmp = mas_prev(&mas, 0);
> +		tmp = vma_prev(&vmi);
>  		if (tmp && vm_end_gap(tmp) > gap) {
>  			high_limit = tmp->vm_start;
> -			mas_reset(&mas);
> +			mas_reset(&vmi.mas);
>  			goto retry;
>  		}
>  	}
> @@ -1900,12 +1899,12 @@ find_vma_prev(struct mm_struct *mm, unsigned long addr,
>  			struct vm_area_struct **pprev)
>  {
>  	struct vm_area_struct *vma;
> -	MA_STATE(mas, &mm->mm_mt, addr, addr);
> +	VMA_ITERATOR(vmi, mm, addr);
>
> -	vma = mas_walk(&mas);
> -	*pprev = mas_prev(&mas, 0);
> +	vma = vma_iter_load(&vmi);
> +	*pprev = vma_prev(&vmi);
>  	if (!vma)
> -		vma = mas_next(&mas, ULONG_MAX);
> +		vma = vma_next(&vmi);
>  	return vma;
>  }
>
> @@ -1959,11 +1958,12 @@ static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  	struct vm_area_struct *next;
>  	unsigned long gap_addr;
>  	int error = 0;
> -	MA_STATE(mas, &mm->mm_mt, vma->vm_start, address);
> +	VMA_ITERATOR(vmi, mm, 0);
>
>  	if (!(vma->vm_flags & VM_GROWSUP))
>  		return -EFAULT;
>
> +	vma_iter_config(&vmi, vma->vm_start, address);
>  	/* Guard against exceeding limits of the address space. */
>  	address &= PAGE_MASK;
>  	if (address >= (TASK_SIZE & PAGE_MASK))
> @@ -1985,15 +1985,15 @@ static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  	}
>
>  	if (next)
> -		mas_prev_range(&mas, address);
> +		mas_prev_range(&vmi.mas, address);
>
> -	__mas_set_range(&mas, vma->vm_start, address - 1);
> -	if (mas_preallocate(&mas, vma, GFP_KERNEL))
> +	vma_iter_config(&vmi, vma->vm_start, address);
> +	if (vma_iter_prealloc(&vmi, vma))
>  		return -ENOMEM;
>
>  	/* We must make sure the anon_vma is allocated. */
>  	if (unlikely(anon_vma_prepare(vma))) {
> -		mas_destroy(&mas);
> +		vma_iter_free(&vmi);
>  		return -ENOMEM;
>  	}
>
> @@ -2033,7 +2033,7 @@ static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  				anon_vma_interval_tree_pre_update_vma(vma);
>  				vma->vm_end = address;
>  				/* Overwrite old entry in mtree. */
> -				mas_store_prealloc(&mas, vma);
> +				vma_iter_store(&vmi, vma);
>  				anon_vma_interval_tree_post_update_vma(vma);
>  				spin_unlock(&mm->page_table_lock);
>
> @@ -2042,7 +2042,7 @@ static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  		}
>  	}
>  	anon_vma_unlock_write(vma->anon_vma);
> -	mas_destroy(&mas);
> +	vma_iter_free(&vmi);
>  	validate_mm(mm);
>  	return error;
>  }
> @@ -2055,9 +2055,9 @@ static int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>  int expand_downwards(struct vm_area_struct *vma, unsigned long address)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	MA_STATE(mas, &mm->mm_mt, vma->vm_start, vma->vm_start);
>  	struct vm_area_struct *prev;
>  	int error = 0;
> +	VMA_ITERATOR(vmi, mm, vma->vm_start);
>
>  	if (!(vma->vm_flags & VM_GROWSDOWN))
>  		return -EFAULT;
> @@ -2067,7 +2067,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
>  		return -EPERM;
>
>  	/* Enforce stack_guard_gap */
> -	prev = mas_prev(&mas, 0);
> +	prev = vma_prev(&vmi);
>  	/* Check that both stack segments have the same anon_vma? */
>  	if (prev) {
>  		if (!(prev->vm_flags & VM_GROWSDOWN) &&
> @@ -2077,15 +2077,15 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
>  	}
>
>  	if (prev)
> -		mas_next_range(&mas, vma->vm_start);
> +		mas_next_range(&vmi.mas, vma->vm_start);
>
> -	__mas_set_range(&mas, address, vma->vm_end - 1);
> -	if (mas_preallocate(&mas, vma, GFP_KERNEL))
> +	vma_iter_config(&vmi, address, vma->vm_end);
> +	if (vma_iter_prealloc(&vmi, vma))
>  		return -ENOMEM;
>
>  	/* We must make sure the anon_vma is allocated. */
>  	if (unlikely(anon_vma_prepare(vma))) {
> -		mas_destroy(&mas);
> +		vma_iter_free(&vmi);
>  		return -ENOMEM;
>  	}
>
> @@ -2126,7 +2126,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
>  				vma->vm_start = address;
>  				vma->vm_pgoff -= grow;
>  				/* Overwrite old entry in mtree. */
> -				mas_store_prealloc(&mas, vma);
> +				vma_iter_store(&vmi, vma);
>  				anon_vma_interval_tree_post_update_vma(vma);
>  				spin_unlock(&mm->page_table_lock);
>
> @@ -2135,7 +2135,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
>  		}
>  	}
>  	anon_vma_unlock_write(vma->anon_vma);
> -	mas_destroy(&mas);
> +	vma_iter_free(&vmi);
>  	validate_mm(mm);
>  	return error;
>  }
> @@ -3233,7 +3233,7 @@ void exit_mmap(struct mm_struct *mm)
>  	struct mmu_gather tlb;
>  	struct vm_area_struct *vma;
>  	unsigned long nr_accounted = 0;
> -	MA_STATE(mas, &mm->mm_mt, 0, 0);
> +	VMA_ITERATOR(vmi, mm, 0);
>  	int count = 0;
>
>  	/* mm's last user has gone, and its about to be pulled down */
> @@ -3242,7 +3242,7 @@ void exit_mmap(struct mm_struct *mm)
>  	mmap_read_lock(mm);
>  	arch_exit_mmap(mm);
>
> -	vma = mas_find(&mas, ULONG_MAX);
> +	vma = vma_next(&vmi);
>  	if (!vma || unlikely(xa_is_zero(vma))) {
>  		/* Can happen if dup_mmap() received an OOM */
>  		mmap_read_unlock(mm);
> @@ -3255,7 +3255,7 @@ void exit_mmap(struct mm_struct *mm)
>  	tlb_gather_mmu_fullmm(&tlb, mm);
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
> -	unmap_vmas(&tlb, &mas, vma, 0, ULONG_MAX, ULONG_MAX, false);
> +	unmap_vmas(&tlb, &vmi.mas, vma, 0, ULONG_MAX, ULONG_MAX, false);
>  	mmap_read_unlock(mm);
>
>  	/*
> @@ -3265,8 +3265,8 @@ void exit_mmap(struct mm_struct *mm)
>  	set_bit(MMF_OOM_SKIP, &mm->flags);
>  	mmap_write_lock(mm);
>  	mt_clear_in_rcu(&mm->mm_mt);
> -	mas_set(&mas, vma->vm_end);
> -	free_pgtables(&tlb, &mas, vma, FIRST_USER_ADDRESS,
> +	vma_iter_set(&vmi, vma->vm_end);
> +	free_pgtables(&tlb, &vmi.mas, vma, FIRST_USER_ADDRESS,
>  		      USER_PGTABLES_CEILING, true);
>  	tlb_finish_mmu(&tlb);
>
> @@ -3275,14 +3275,14 @@ void exit_mmap(struct mm_struct *mm)
>  	 * enabled, without holding any MM locks besides the unreachable
>  	 * mmap_write_lock.
>  	 */
> -	mas_set(&mas, vma->vm_end);
> +	vma_iter_set(&vmi, vma->vm_end);
>  	do {
>  		if (vma->vm_flags & VM_ACCOUNT)
>  			nr_accounted += vma_pages(vma);
>  		remove_vma(vma, true);
>  		count++;
>  		cond_resched();
> -		vma = mas_find(&mas, ULONG_MAX);
> +		vma = vma_next(&vmi);
>  	} while (vma && likely(!xa_is_zero(vma)));
>
>  	BUG_ON(count != mm->map_count);
> @@ -3704,7 +3704,7 @@ int mm_take_all_locks(struct mm_struct *mm)
>  {
>  	struct vm_area_struct *vma;
>  	struct anon_vma_chain *avc;
> -	MA_STATE(mas, &mm->mm_mt, 0, 0);
> +	VMA_ITERATOR(vmi, mm, 0);
>
>  	mmap_assert_write_locked(mm);
>
> @@ -3716,14 +3716,14 @@ int mm_take_all_locks(struct mm_struct *mm)
>  	 * being written to until mmap_write_unlock() or mmap_write_downgrade()
>  	 * is reached.
>  	 */
> -	mas_for_each(&mas, vma, ULONG_MAX) {
> +	for_each_vma(vmi, vma) {
>  		if (signal_pending(current))
>  			goto out_unlock;
>  		vma_start_write(vma);
>  	}
>
> -	mas_set(&mas, 0);
> -	mas_for_each(&mas, vma, ULONG_MAX) {
> +	vma_iter_init(&vmi, mm, 0);
> +	for_each_vma(vmi, vma) {
>  		if (signal_pending(current))
>  			goto out_unlock;
>  		if (vma->vm_file && vma->vm_file->f_mapping &&
> @@ -3731,8 +3731,8 @@ int mm_take_all_locks(struct mm_struct *mm)
>  			vm_lock_mapping(mm, vma->vm_file->f_mapping);
>  	}
>
> -	mas_set(&mas, 0);
> -	mas_for_each(&mas, vma, ULONG_MAX) {
> +	vma_iter_init(&vmi, mm, 0);
> +	for_each_vma(vmi, vma) {
>  		if (signal_pending(current))
>  			goto out_unlock;
>  		if (vma->vm_file && vma->vm_file->f_mapping &&
> @@ -3740,8 +3740,8 @@ int mm_take_all_locks(struct mm_struct *mm)
>  			vm_lock_mapping(mm, vma->vm_file->f_mapping);
>  	}
>
> -	mas_set(&mas, 0);
> -	mas_for_each(&mas, vma, ULONG_MAX) {
> +	vma_iter_init(&vmi, mm, 0);
> +	for_each_vma(vmi, vma) {
>  		if (signal_pending(current))
>  			goto out_unlock;
>  		if (vma->anon_vma)
> @@ -3800,12 +3800,12 @@ void mm_drop_all_locks(struct mm_struct *mm)
>  {
>  	struct vm_area_struct *vma;
>  	struct anon_vma_chain *avc;
> -	MA_STATE(mas, &mm->mm_mt, 0, 0);
> +	VMA_ITERATOR(vmi, mm, 0);
>
>  	mmap_assert_write_locked(mm);
>  	BUG_ON(!mutex_is_locked(&mm_all_locks_mutex));
>
> -	mas_for_each(&mas, vma, ULONG_MAX) {
> +	for_each_vma(vmi, vma) {
>  		if (vma->anon_vma)
>  			list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
>  				vm_unlock_anon_vma(avc->anon_vma);