Subject: Re: [PATCH RFC 0/3] mm: Reduce IO by improving algorithm of memcg
 pagecache pages eviction
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Josef Bacik, Jan Kara, Hugh Dickins,
 "Darrick J. Wong", Michal Hocko, Andrey Ryabinin, Roman Gushchin,
 Mel Gorman, Linux MM, LKML
References: <154703479840.32690.6504699919905946726.stgit@localhost.localdomain>
From: Kirill Tkhai
Message-ID: <5f105f0b-ef4e-2f52-4a36-94c05c8140ae@virtuozzo.com>
Date: Fri, 11 Jan 2019 15:17:43 +0300

On 10.01.2019 22:19, Shakeel Butt wrote:
> On Thu, Jan 10, 2019 at 1:46 AM Kirill Tkhai wrote:
>>
>> Hi, Shakeel,
>>
>> On 09.01.2019 20:37, Shakeel Butt wrote:
>>> Hi Kirill,
>>>
>>> On Wed, Jan 9, 2019 at 4:20 AM Kirill Tkhai wrote:
>>>>
>>>> On nodes without memory overcommit, it is a common situation that
>>>> a memcg exceeds its limit and pagecache pages are shrunk on
>>>> reclaim, while the node has a lot of free memory. Further access
>>>> to those pages then requires real device IO, which causes time
>>>> delays, worse power usage, worse throughput for other users of the
>>>> device, etc.
>>>>
>>>> Cleancache is not a good solution for this problem, since it
>>>> implies copying the page on every cleancache_put_page() and
>>>> cleancache_get_page(). It also requires internal per-cleancache_ops
>>>> data structures to manage cached pages and their inode
>>>> relationships, which again introduces overhead.
>>>>
>>>> This patchset introduces another solution: a new scheme for
>>>> evicting memcg pages:
>>>>
>>>>  1) __remove_mapping() uncharges the unmapped page's memcg and
>>>>     leaves the page in the pagecache on memcg reclaim;
>>>>
>>>>  2) putback_lru_page() places the page onto the root_mem_cgroup
>>>>     list, since its memcg is NULL. The page may be evicted on
>>>>     global reclaim (and this will be easy, as the page is not
>>>>     mapped, so the shrinker will shrink it with 100% probability
>>>>     of success);
>>>>
>>>>  3) pagecache_get_page() charges the page to the memcg of the
>>>>     task which touches it first.
>>>
>>> From what I understand from the proposal, on memcg reclaim, the
>>> file pages are uncharged but kept in memory, and if they are
>>> accessed again (either through mmap or a syscall), they will be
>>> charged again, but to the requesting memcg. Also it is assumed that
>>> the global reclaim of such uncharged file pages is very fast and
>>> deterministic. Is that right?
>>
>> Yes, this was my assumption. But Michal, Josef and Johannes pointed
>> out that diving into reclaim in general is not fast. So maybe we
>> need some more creativity here to minimize the effect of this
>> diving.
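To make the scheme quoted above concrete, here is a toy C model of
where the charge moves at each of the three steps. The types and
helpers are simplified stand-ins for struct page, struct mem_cgroup,
root_mem_cgroup and the LRU code; this is an illustration only, not
the actual patches:

#include <stdbool.h>
#include <stddef.h>

struct memcg;                       /* stand-in for struct mem_cgroup  */

struct page {
        struct memcg *memcg;        /* NULL means "uncharged"          */
        bool mapped;                /* still mapped into some process? */
        bool in_pagecache;
};

extern struct memcg root_memcg;     /* stand-in for root_mem_cgroup    */
extern void lru_add(struct memcg *cg, struct page *p);

/* 1) memcg-targeted reclaim: uncharge the unmapped page but keep it
 *    in the pagecache instead of freeing it.                          */
static void memcg_evict(struct page *p)
{
        if (!p->mapped && p->in_pagecache)
                p->memcg = NULL;            /* uncharge, do not free   */
}

/* 2) putback: a page with no memcg lands on the root cgroup's LRU,
 *    where global reclaim can drop it cheaply (clean and unmapped).   */
static void putback_page(struct page *p)
{
        lru_add(p->memcg ? p->memcg : &root_memcg, p);
}

/* 3) pagecache lookup: the first task that touches the page afterwards
 *    pays for it, i.e. the page is charged to that task's memcg.      */
static void pagecache_touch(struct page *p, struct memcg *task_memcg)
{
        if (!p->memcg)
                p->memcg = task_memcg;      /* re-charge on first use  */
}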
> I kind of disagree that this patchset is breaking the API semantics,
> as the charged memory of a memcg will never go over
> max/limit_in_bytes. However, the concern I have is performance
> isolation. The performance of a pagecache-heavy job with a private
> mount can be impacted by other jobs running on the system. This might
> be fine for some customers, but not for Google. One use-case I can
> tell is the auto-tuner, which adjusts the limits of jobs based on
> their performance and history. So, to make the auto-tuning
> deterministic, we would have to disable the proposed optimization for
> jobs with auto-tuning enabled. Besides that, there are internal
> non-auto-tuned customers who prefer deterministic performance.
>
> Also, I am a bit skeptical that allocation from the pool of such
> (clean, unmapped, uncharged) file pages can be made as efficient as
> the fastpath of the page allocator. Even if these pages are stored in
> a separate list instead of root's LRU, on allocation the pages need
> to be unlinked from their mapping and have to be cleared.

I'd say we move this unlinking from one place to another, so the
unlinking itself does not introduce more pressure on the node. The
differences from the current behavior are that 1) we need to go
through all the shrinker functions once again, and 2) this forces the
caller to do global reclaim and iterate over all memcgs. But it looks
like these two things may be solved in some way.

> BTW, does this optimization have any impact on the workingset
> mechanism?

I can't say exactly, but since the pages are unmapped, we should not
call any workingset handlers, so I don't know what we may break there.

Kirill
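P.S. To illustrate the allocation-cost concern quoted above: reusing a
clean, unmapped, uncharged pagecache page is necessarily heavier than
the page allocator fastpath, which is roughly a per-cpu freelist pop.
All helper names in this sketch are hypothetical:

struct page;

/* Hypothetical helpers, named only for illustration. */
extern struct page *pop_uncharged_list(void);     /* pool of clean pages   */
extern void unlink_from_mapping(struct page *p);  /* detach from old inode */
extern void clear_page_contents(struct page *p);  /* zero before reuse     */

/*
 * Reusing a clean, unmapped, uncharged pagecache page: the page still
 * belongs to its old inode's mapping and holds stale file data, so two
 * extra steps are unavoidable before it can be handed out.
 */
struct page *alloc_from_uncharged_pool(void)
{
        struct page *p = pop_uncharged_list();

        if (!p)
                return NULL;          /* pool empty: fall back to buddy */
        unlink_from_mapping(p);       /* remove from old inode's tree   */
        clear_page_contents(p);       /* old file data must not leak    */
        return p;
}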