From: "Huang, Ying"
To: Andrew Morton
Cc: Daniel Jordan, Michal Hocko, Minchan Kim, Tim Chen, Hugh Dickins
Subject: Re: [PATCH -V2] swap: Reduce lock contention on swap cache from swap slots allocation
References: <20200520031502.175659-1-ying.huang@intel.com>
 <20200520195102.2343f746e88a2bec5c29ef5b@linux-foundation.org>
Date: Thu, 21 May 2020 11:24:40 +0800
In-Reply-To: <20200520195102.2343f746e88a2bec5c29ef5b@linux-foundation.org>
 (Andrew Morton's message of "Wed, 20 May 2020 19:51:02 -0700")
Message-ID: <87o8qihsw7.fsf@yhuang-dev.intel.com>

Andrew Morton writes:

> On Wed, 20 May 2020 11:15:02 +0800 Huang Ying wrote:
>
>> In some swap scalability tests, it is found that there is heavy lock
>> contention on the swap cache even though the swap cache radix tree
>> has already been split from one per swap device into one per 64 MB
>> trunk in commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
>> trunks").
>>
>> The reason is as follows. After the swap device becomes fragmented so
>> that there is no free swap cluster, the swap device is scanned
>> linearly to find free swap slots. swap_info_struct->cluster_next is
>> the next scanning base and is shared by all CPUs, so nearby free swap
>> slots are allocated to different CPUs. The probability of multiple
>> CPUs operating on the same 64 MB trunk is therefore high, which
>> causes the lock contention on the swap cache.
>>
>> To solve the issue, this patch adds a per-CPU next scanning base
>> (cluster_next_cpu) for SSD swap devices. Every CPU uses its own
>> per-CPU next scanning base, and after finishing scanning a 64 MB
>> trunk, the per-CPU scanning base is moved to the beginning of
>> another, randomly selected 64 MB trunk. In this way, the probability
>> of multiple CPUs operating on the same 64 MB trunk is greatly
>> reduced, and so is the lock contention. For HDDs, where sequential
>> access is more important for I/O performance, the original shared
>> next scanning base is kept.
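
To make this concrete, the core of the change is a per-CPU scanning
base update along the lines of the sketch below. This is a simplified
illustration rather than the exact patch code; cluster_next_cpu,
SWP_SOLIDSTATE, and SWAP_ADDRESS_SPACE_SHIFT are real kernel names,
while the helper name and details are only indicative.

static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
{
        unsigned long prev;

        if (!(si->flags & SWP_SOLIDSTATE)) {
                /* HDD: keep the shared scanning base for sequential I/O. */
                si->cluster_next = next;
                return;
        }

        prev = this_cpu_read(*si->cluster_next_cpu);
        /*
         * If the scan has crossed into another 64 MB trunk (the swap
         * cache radix tree granularity), jump to a randomly chosen
         * trunk so that CPUs spread out over different trunks.
         */
        if ((prev >> SWAP_ADDRESS_SPACE_SHIFT) !=
            (next >> SWAP_ADDRESS_SPACE_SHIFT)) {
                /* No free range to choose from. */
                if (si->highest_bit <= si->lowest_bit)
                        return;
                next = si->lowest_bit +
                       prandom_u32_max(si->highest_bit - si->lowest_bit + 1);
                next = ALIGN_DOWN(next, SWAP_ADDRESS_SPACE_PAGES);
                next = max_t(unsigned int, next, si->lowest_bit);
        }
        this_cpu_write(*si->cluster_next_cpu, next);
}

On the allocation side, the SSD path then starts scanning from this
per-CPU value instead of the shared si->cluster_next.
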
>> To test the patch, we have run the 16-process pmbench memory
>> benchmark on a 2-socket server machine with 48 cores. One ram disk is
>> configured

> What does "ram disk" mean here? Which driver(s) are in use and backed
> by what sort of memory?

We use the following kernel command line,

  memmap=48G!6G memmap=48G!68G

to create two DRAM-backed /dev/pmem disks (48 GB each). Then we use
these ram disks as swap devices.

>> as the swap device per socket. The pmbench working-set size is much
>> larger than the available memory, so that swapping is triggered. The
>> memory read/write ratio is 80/20 and the access pattern is random.
>> In the original implementation, the lock contention on the swap cache
>> is heavy. The perf profiling data of the lock contention code paths
>> is as follows:
>>
>>   _raw_spin_lock_irq.add_to_swap_cache.add_to_swap.shrink_page_list:      7.91
>>   _raw_spin_lock_irqsave.__remove_mapping.shrink_page_list:               7.11
>>   _raw_spin_lock.swapcache_free_entries.free_swap_slot.__swap_entry_free: 2.51
>>   _raw_spin_lock_irqsave.swap_cgroup_record.mem_cgroup_uncharge_swap:     1.66
>>   _raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:      1.29
>>   _raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:         1.03
>>   _raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:        0.93
>>
>> After applying this patch, it becomes:
>>
>>   _raw_spin_lock.swapcache_free_entries.free_swap_slot.__swap_entry_free: 3.58
>>   _raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node:      2.3
>>   _raw_spin_lock_irqsave.swap_cgroup_record.mem_cgroup_uncharge_swap:     2.26
>>   _raw_spin_lock_irq.shrink_active_list.shrink_lruvec.shrink_node:        1.8
>>   _raw_spin_lock.free_pcppages_bulk.drain_pages_zone.drain_pages:         1.19
>>
>> The lock contention on the swap cache is almost eliminated.
>>
>> And the pmbench score increases 18.5%. The swapin throughput
>> increases 18.7%, from 2.96 GB/s to 3.51 GB/s, while the swapout
>> throughput increases 18.5%, from 2.99 GB/s to 3.54 GB/s.

> If this was backed by plain old RAM, can we assume that the performance
> improvement on SSD swap is still good?

We need a really fast disk to show the benefit. I have tried this on 2
Intel P3600 NVMe disks; the performance improvement is only about 1%.
The improvement should be better on faster disks, such as Intel Optane
disks. I will try to find some to test.

> Does the ram disk actually set SWP_SOLIDSTATE?

Yes. "blk_queue_flag_set(QUEUE_FLAG_NONROT, q)" is called in
drivers/nvdimm/pmem.c, so the device queue is marked non-rotational; a
sketch of how that propagates to SWP_SOLIDSTATE is appended below.

Best Regards,
Huang, Ying
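
For reference, a rough sketch (simplified, not verbatim kernel code) of
how the non-rotational queue flag turns into SWP_SOLIDSTATE, which in
turn selects the SSD allocation path described above:

        /* drivers/nvdimm/pmem.c: the pmem queue is marked non-rotational. */
        blk_queue_flag_set(QUEUE_FLAG_NONROT, q);

        /*
         * mm/swapfile.c, swapon path (roughly): non-rotational backing
         * devices get SWP_SOLIDSTATE, enabling the cluster/per-CPU
         * scanning base instead of the shared, sequential one.
         */
        if (p->bdev && blk_queue_nonrot(bdev_get_queue(p->bdev)))
                p->flags |= SWP_SOLIDSTATE;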