Subject: Re: [RFC PATCH v2 0/8] lru_lock scalability and SMP list functions
To: Daniel Jordan, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: aaron.lu@intel.com, ak@linux.intel.com, akpm@linux-foundation.org, dave.dice@oracle.com, dave.hansen@linux.intel.com, hannes@cmpxchg.org, levyossi@icloud.com, ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mhocko@kernel.org, Pavel.Tatashin@microsoft.com, steven.sistare@oracle.com, tim.c.chen@intel.com, vdavydov.dev@gmail.com, ying.huang@intel.com
References: <20180911004240.4758-1-daniel.m.jordan@oracle.com>
From: Vlastimil Babka <vbabka@suse.cz>
Message-ID: <2705c814-a6b8-0b14-7ea8-790325833d95@suse.cz>
Date: Fri, 19 Oct 2018 13:35:11 +0200
In-Reply-To: <20180911004240.4758-1-daniel.m.jordan@oracle.com>

On 9/11/18 2:42 AM, Daniel Jordan wrote:
> Hi,
> 
> This is a work-in-progress of what I presented at LSF/MM this year[0] to
> greatly reduce contention on lru_lock, allowing it to scale on large
> systems.
> 
> This is completely different from the lru_lock series posted last
> January[1].
> 
> I'm hoping for feedback on the overall design and general direction as I
> do more real-world performance testing and polish the code.  Is this a
> workable approach?
> 
> Thanks,
> Daniel
> 
> ---
> 
> Summary: lru_lock can be one of the hottest locks in the kernel on big
> systems.  It guards too much state, so introduce new SMP-safe list
> functions to allow multiple threads to operate on the LRUs at once.  The
> SMP list functions are provided in a standalone API that can be used in
> other parts of the kernel.  When lru_lock and zone->lock are both fixed,
> the kernel can do up to 73.8% more page faults per second on a 44-core
> machine.
> 
> ---
> 
> On large systems, lru_lock can become heavily contended in
> memory-intensive workloads such as decision support, applications that
> manage their memory manually by allocating and freeing pages directly from
> the kernel, and workloads with short-lived processes that force many munmap
> and exit operations.  lru_lock also inhibits scalability in many of the MM
> paths that could be parallelized, such as freeing pages during exit/munmap
> and inode eviction.

Interesting, I would have expected isolate_lru_pages() to be the main
culprit, as its comment says:

 * For pagecache intensive workloads, this function is the hottest
 * spot in the kernel (apart from copy_*_user functions).

It also says "Some of the functions that shrink the lists perform better by
taking out a batch of pages and working on them outside the LRU lock."

Makes me wonder why isolate_lru_pages() doesn't also cut the list first
instead of doing a per-page list_move() (and perhaps also prefetch a batch
of struct pages outside the lock first? Could be doable with some care,
hopefully).

> The problem is that lru_lock is too big of a hammer.  It guards all the
> LRUs in a pgdat's lruvec, needlessly serializing add-to-front,
> add-to-tail, and delete operations that are done on disjoint parts of an
> LRU, or even completely different LRUs.
> 
> This RFC series, developed in collaboration with Yossi Lev and Dave Dice,
> offers a two-part solution to this problem.
> 
> First, three new list functions are introduced to allow multiple threads
> to operate on the same linked list simultaneously under certain
> conditions, which are spelled out in more detail in code comments and
> changelogs.  The functions are smp_list_del, smp_list_splice, and
> smp_list_add, and do the same things as their non-SMP-safe counterparts.
> These primitives may be used elsewhere in the kernel as the need arises;
> for example, in the page allocator free lists to scale zone->lock[2], or
> in file system LRUs[3].
> 
> Second, lru_lock is converted from a spinlock to a rwlock.  The idea is
> to repurpose the rwlock as a two-mode lock, where callers take the lock in
> shared (i.e. read) mode for code using the SMP list functions, and
> exclusive (i.e. write) mode for existing code that expects exclusive
> access to the LRUs.  Multiple threads are allowed in under the read lock,
> of course, and they use the SMP list functions to synchronize amongst
> themselves.
> 
> The rwlock is scaffolding to facilitate the transition from the
> big-hammer lru_lock as it exists today to just using the list locking
> primitives and getting rid of lru_lock entirely.  Such an approach allows
> incremental conversion of lru_lock writers until everything uses the SMP
> list functions and takes the lock in shared mode, at which point lru_lock
> can just go away.

Yeah, I guess that will need more care, e.g. I think smp_list_del() can
break any thread doing just a read-only traversal, as it can end up at an
entry that has been deleted and had its next/prev poisoned.  It's a bit
counterintuitive that a "read lock" is now enough for selected modify
operations, while a read-only traversal would need the write lock.

> This RFC series is incomplete.  More, and more realistic, performance
> numbers are needed; for now, I show only will-it-scale/page_fault1.
> Also, there are extensions I'd like to make to the locking scheme to
> handle certain lru_lock paths--in particular, those where multiple
> threads may delete the same node from an LRU.  The SMP list functions
> now handle only removal of _adjacent_ nodes from an LRU.  Finally, the
> diffstat should become more supportive after I remove some of the code
> duplication in patch 6 by converting the rest of the per-CPU pagevec
> code in mm/swap.c to use the SMP list functions.