Date: Thu, 29 Feb 2024 18:12:00 +0800
From: Baoquan He
To: rulinhuang
Cc: urezki@gmail.com, akpm@linux-foundation.org, colin.king@intel.com,
	hch@infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	lstoakes@gmail.com, tianyou.li@intel.com, tim.c.chen@intel.com,
	wangyang.guo@intel.com, zhiguo.zhou@intel.com
Subject: Re: [PATCH v6] mm/vmalloc: lock contention optimization under multi-threading
References: <20240229082611.4104839-1-rulin.huang@intel.com>
In-Reply-To: <20240229082611.4104839-1-rulin.huang@intel.com>

Hi Rulin,

Thanks for the great work and v6. I have some concerns; please see the
inline comments.

On 02/29/24 at 12:26am, rulinhuang wrote:
> When allocating a new memory area where the mapping address range is
> known, it is observed that the vmap_node->busy.lock is acquired twice.
>
> The first acquisition occurs in the alloc_vmap_area() function when
> inserting the vm area into the vm mapping red-black tree. The second
> acquisition occurs in the setup_vmalloc_vm() function when updating
> the properties of the vm, such as its flags and address.
>
> Combine these two operations in alloc_vmap_area(), which improves
> scalability when the vmap_node->busy.lock is contended. By doing so,
> the lock needs to be acquired only once instead of twice.
>
> With the above change, tested on an Intel Sapphire Rapids platform
> (224 vCPUs), a 4% performance improvement is gained on
> stress-ng/pthread (https://github.com/ColinIanKing/stress-ng), which
> is a stress test of thread creation.
>
> Reviewed-by: Uladzislau Rezki
> Reviewed-by: Baoquan He
> Reviewed-by: "Chen, Tim C"
> Reviewed-by: "King, Colin"

We possibly need to remove these reviewers' tags when a new code change
is taken, so that people can check the patch again and re-add their
Acked-by or Reviewed-by if they agree, or add new comments if they have
any concerns.
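Also, to recap the locking change for other reviewers, the before/after
pattern is roughly the sketch below. This is a simplified illustration
only, with error handling and the surrounding allocation logic elided;
the function names match the patch:

	/* Before: vn->busy.lock is taken twice per allocation. */
	spin_lock(&vn->busy.lock);
	insert_vmap_area(va, &vn->busy.root, &vn->busy.head); /* alloc_vmap_area() */
	spin_unlock(&vn->busy.lock);
	...
	spin_lock(&vn->busy.lock);
	setup_vmalloc_vm_locked(vm, va, flags, caller);       /* setup_vmalloc_vm() */
	spin_unlock(&vn->busy.lock);

	/* After: the vm is bound before insertion, while the va is still
	 * invisible to other threads, so no lock is needed for the setup
	 * and only one acquisition remains.
	 */
	if (vm)
		setup_vmalloc_vm(vm, va, flags, caller);
	spin_lock(&vn->busy.lock);
	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
	spin_unlock(&vn->busy.lock);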
> Signed-off-by: rulinhuang
> ---
> V1 -> V2: Avoided the partial initialization issue of vm and
>           separated insert_vmap_area() from alloc_vmap_area()
> V2 -> V3: Rebased on 6.8-rc5
> V3 -> V4: Rebased on mm-unstable branch
> V4 -> V5: Canceled the split of alloc_vmap_area()
>           and kept insert_vmap_area()
> V5 -> V6: Added BUG_ON
> ---
>  mm/vmalloc.c | 132 +++++++++++++++++++++++++--------------------------
>  1 file changed, 64 insertions(+), 68 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 25a8df497255..5ae028b0d58d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1841,15 +1841,66 @@ node_alloc(unsigned long size, unsigned long align,
>  	return va;
>  }
>
> +/*** Per cpu kva allocator ***/
> +
> +/*
> + * vmap space is limited especially on 32 bit architectures. Ensure there is
> + * room for at least 16 percpu vmap blocks per CPU.
> + */
> +/*
> + * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
> + * to #define VMALLOC_SPACE	(VMALLOC_END-VMALLOC_START). Guess
> + * instead (we just need a rough idea)
> + */
> +#if BITS_PER_LONG == 32
> +#define VMALLOC_SPACE		(128UL*1024*1024)
> +#else
> +#define VMALLOC_SPACE		(128UL*1024*1024*1024)
> +#endif
> +
> +#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
> +#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
> +#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> +#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
> +#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
> +#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
> +#define VMAP_BBMAP_BITS		\
> +		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
> +		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
> +			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
> +
> +#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
> +
> +/*
> + * Purge threshold to prevent overeager purging of fragmented blocks for
> + * regular operations: Purge if vb->free is less than 1/4 of the capacity.
> + */
> +#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
> +
> +#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
> +#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
> +#define VMAP_FLAGS_MASK		0x3

This code movement is made because we need to check VMAP_RAM in advance.
We may need to move all the data structures and basic helpers related to
the per-CPU kva allocator up along with these macros too, just as the
newly introduced vmap_node does. If that's agreed, it had better be done
in a separate patch. That's my personal opinion; not sure if Uladzislau
has different thoughts. Other than this, the overall change looks good
to me.
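As an aside for anyone puzzling over the moved VMAP_BBMAP_BITS clamp,
the arithmetic works out like this on a common configuration. The
concrete NR_CPUS and PAGE_SIZE values are illustrative assumptions
only:

	/* Assuming BITS_PER_LONG == 64, PAGE_SIZE == 4 KiB, NR_CPUS == 64:
	 *
	 *   VMALLOC_SPACE   = 128 GiB              (the rough guess above)
	 *   VMALLOC_PAGES   = 128 GiB / 4 KiB      = 33554432
	 *   raw bits        = 33554432 / 64 / 16   = 32768
	 *   VMAP_BBMAP_BITS = min(1024, max(128, 32768)) = 1024
	 *   VMAP_BLOCK_SIZE = 1024 * 4 KiB         = 4 MiB
	 *
	 * which matches the "4MB with 4K pages" comment. The 1024 cap is
	 * what applies on most 64-bit configs; the lower bound of 128
	 * (VMAP_BBMAP_BITS_MIN) only matters for very large NR_CPUS.
	 */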
> +
> +static inline void setup_vmalloc_vm(struct vm_struct *vm,
> +	struct vmap_area *va, unsigned long flags, const void *caller)
> +{
> +	vm->flags = flags;
> +	vm->addr = (void *)va->va_start;
> +	vm->size = va->va_end - va->va_start;
> +	vm->caller = caller;
> +	va->vm = vm;
> +}
> +
>  /*
>   * Allocate a region of KVA of the specified size and alignment, within the
> - * vstart and vend.
> + * vstart and vend. If vm is passed in, the two will also be bound.
>   */
>  static struct vmap_area *alloc_vmap_area(unsigned long size,
>  				unsigned long align,
>  				unsigned long vstart, unsigned long vend,
>  				int node, gfp_t gfp_mask,
> -				unsigned long va_flags)
> +				unsigned long va_flags, struct vm_struct *vm,
> +				unsigned long flags, const void *caller)
>  {
>  	struct vmap_node *vn;
>  	struct vmap_area *va;
> @@ -1912,6 +1963,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	va->vm = NULL;
>  	va->flags = (va_flags | vn_id);
>
> +	if (vm) {
> +		BUG_ON(va_flags & VMAP_RAM);
> +		setup_vmalloc_vm(vm, va, flags, caller);
> +	}
> +
>  	vn = addr_to_node(va->va_start);
>
>  	spin_lock(&vn->busy.lock);
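A small usage note on the widened signature, condensed from the hunks
further down: callers that have no vm_struct to bind now pass dummies
for the three new parameters, while __get_vm_area_node() hands
everything over and drops its own setup_vmalloc_vm() call. Roughly:

	/* vm_map_ram()/new_vmap_block() style: no vm_struct to bind. */
	va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START, VMALLOC_END,
			     node, GFP_KERNEL, VMAP_RAM, NULL, 0, NULL);

	/* __get_vm_area_node(): bind the area in the same call. Passing a
	 * non-NULL vm together with VMAP_RAM would trip the BUG_ON above.
	 */
	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0,
			     area, flags, caller);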
> @@ -2325,46 +2381,6 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
>  	return NULL;
>  }
>
> -/*** Per cpu kva allocator ***/
> -
> -/*
> - * vmap space is limited especially on 32 bit architectures. Ensure there is
> - * room for at least 16 percpu vmap blocks per CPU.
> - */
> -/*
> - * If we had a constant VMALLOC_START and VMALLOC_END, we'd like to be able
> - * to #define VMALLOC_SPACE	(VMALLOC_END-VMALLOC_START). Guess
> - * instead (we just need a rough idea)
> - */
> -#if BITS_PER_LONG == 32
> -#define VMALLOC_SPACE		(128UL*1024*1024)
> -#else
> -#define VMALLOC_SPACE		(128UL*1024*1024*1024)
> -#endif
> -
> -#define VMALLOC_PAGES		(VMALLOC_SPACE / PAGE_SIZE)
> -#define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */
> -#define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
> -#define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)
> -#define VMAP_MIN(x, y)		((x) < (y) ? (x) : (y)) /* can't use min() */
> -#define VMAP_MAX(x, y)		((x) > (y) ? (x) : (y)) /* can't use max() */
> -#define VMAP_BBMAP_BITS		\
> -		VMAP_MIN(VMAP_BBMAP_BITS_MAX,	\
> -		VMAP_MAX(VMAP_BBMAP_BITS_MIN,	\
> -			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))
> -
> -#define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)
> -
> -/*
> - * Purge threshold to prevent overeager purging of fragmented blocks for
> - * regular operations: Purge if vb->free is less than 1/4 of the capacity.
> - */
> -#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
> -
> -#define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
> -#define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
> -#define VMAP_FLAGS_MASK		0x3
> -
>  struct vmap_block_queue {
>  	spinlock_t lock;
>  	struct list_head free;
> @@ -2486,7 +2502,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>  	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
>  					VMALLOC_START, VMALLOC_END,
>  					node, gfp_mask,
> -					VMAP_RAM|VMAP_BLOCK);
> +					VMAP_RAM|VMAP_BLOCK, NULL,
> +					0, NULL);
>  	if (IS_ERR(va)) {
>  		kfree(vb);
>  		return ERR_CAST(va);
> @@ -2843,7 +2860,8 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
>  		struct vmap_area *va;
>  		va = alloc_vmap_area(size, PAGE_SIZE,
>  				VMALLOC_START, VMALLOC_END,
> -				node, GFP_KERNEL, VMAP_RAM);
> +				node, GFP_KERNEL, VMAP_RAM,
> +				NULL, 0, NULL);
>  		if (IS_ERR(va))
>  			return NULL;
>
> @@ -2946,26 +2964,6 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
>  	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
>  }
>
> -static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
> -	struct vmap_area *va, unsigned long flags, const void *caller)
> -{
> -	vm->flags = flags;
> -	vm->addr = (void *)va->va_start;
> -	vm->size = va->va_end - va->va_start;
> -	vm->caller = caller;
> -	va->vm = vm;
> -}
> -
> -static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
> -	unsigned long flags, const void *caller)
> -{
> -	struct vmap_node *vn = addr_to_node(va->va_start);
> -
> -	spin_lock(&vn->busy.lock);
> -	setup_vmalloc_vm_locked(vm, va, flags, caller);
> -	spin_unlock(&vn->busy.lock);
> -}
> -
>  static void clear_vm_uninitialized_flag(struct vm_struct *vm)
>  {
>  	/*
> @@ -3002,14 +3000,12 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
>  	if (!(flags & VM_NO_GUARD))
>  		size += PAGE_SIZE;
>
> -	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0);
> +	va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area, flags, caller);
>  	if (IS_ERR(va)) {
>  		kfree(area);
>  		return NULL;
>  	}
>
> -	setup_vmalloc_vm(area, va, flags, caller);
> -
>  	/*
>  	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
>  	 * best-effort approach, as they can be mapped outside of vmalloc code.
> @@ -4584,7 +4580,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>
>  		spin_lock(&vn->busy.lock);
>  		insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
> -		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
> +		setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC,
>  				      pcpu_get_vm_areas);
>  		spin_unlock(&vn->busy.lock);
>  	}
>
> base-commit: 7e6ae2db7f319bf9613ec6db8fa3c9bc1de1b346
> --
> 2.43.0
>