Date: Tue, 18 Aug 2020 11:26:17 +0200
From: Michal Hocko
To: peterz@infradead.org
Cc: Waiman Long, Andrew Morton, Johannes Weiner, Vladimir Davydov,
    Jonathan Corbet, Alexey Dobriyan, Ingo Molnar, Juri Lelli,
    Vincent Guittot, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 0/8] memcg: Enable fine-grained per process memory control
Message-ID: <20200818092617.GN28270@dhcp22.suse.cz>
References: <20200817140831.30260-1-longman@redhat.com>
 <20200818091453.GL2674@hirez.programming.kicks-ass.net>
In-Reply-To: <20200818091453.GL2674@hirez.programming.kicks-ass.net>

On Tue 18-08-20 11:14:53, Peter Zijlstra wrote:
> On Mon, Aug 17, 2020 at 10:08:23AM -0400, Waiman Long wrote:
> > The memory controller can be used to control and limit the amount of
> > physical memory used by a task. When a limit is set in "memory.high" in
> > a v2 non-root memory cgroup, the memory controller will try to reclaim
> > memory once the limit has been exceeded. Normally, that is enough to
> > keep the physical memory consumption of tasks in the memory cgroup
> > around or below the "memory.high" limit.
> >
> > Sometimes, however, memory reclaim may not be able to recover memory
> > at a rate that keeps up with the physical memory allocation rate. In
> > that case, physical memory consumption will keep increasing.
>
> Then slow down the allocator? That's what we do for dirty pages too; we
> slow down the dirtier when we run against the limits.

This is what we actually do. Have a look at mem_cgroup_handle_over_high.
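To make that concrete, below is a toy user-space sketch of the general
shape of that throttling: on return to user space, a task whose cgroup is
over memory.high first tries to reclaim and, if it is still over the limit,
sleeps for a delay that grows with the overage. Everything in the sketch
(struct memcg_sim, reclaim_high, calculate_penalty_usec, the constants) is
invented for illustration; the real logic lives in mm/memcontrol.c and is
considerably more careful about how the delay is scaled and clamped.

/*
 * Toy user-space sketch of allocator-side throttling: when usage exceeds
 * the memory.high limit, first try to reclaim, and if the task is still
 * over the limit, put it to sleep for a delay that grows with the overage.
 * All helpers and constants here are made up for illustration; this is not
 * the kernel's mem_cgroup_handle_over_high().
 */
#include <stdio.h>
#include <unistd.h>

#define MAX_SLEEP_USEC 500000UL   /* cap the per-return penalty at 0.5 s */

struct memcg_sim {
	unsigned long usage_pages;  /* pages currently charged to the cgroup */
	unsigned long high_pages;   /* the "memory.high" limit, in pages */
};

/* Pretend reclaim: assume only half of the requested pages can be freed. */
static unsigned long reclaim_high(struct memcg_sim *memcg, unsigned long nr_pages)
{
	unsigned long reclaimed = nr_pages / 2;

	if (reclaimed > memcg->usage_pages)
		reclaimed = memcg->usage_pages;
	memcg->usage_pages -= reclaimed;
	return reclaimed;
}

/* Penalty grows quadratically with the remaining overage, then is capped. */
static unsigned long calculate_penalty_usec(const struct memcg_sim *memcg)
{
	unsigned long over, penalty;

	if (memcg->usage_pages <= memcg->high_pages)
		return 0;
	over = memcg->usage_pages - memcg->high_pages;
	penalty = over * over / 1024;   /* arbitrary scaling for the demo */
	return penalty < MAX_SLEEP_USEC ? penalty : MAX_SLEEP_USEC;
}

/* Would run when a task returns to user space: reclaim, then throttle. */
static void handle_over_high(struct memcg_sim *memcg)
{
	unsigned long penalty;

	if (memcg->usage_pages > memcg->high_pages)
		reclaim_high(memcg, memcg->usage_pages - memcg->high_pages);

	penalty = calculate_penalty_usec(memcg);
	if (penalty) {
		printf("still %lu pages over high, sleeping %lu us\n",
		       memcg->usage_pages - memcg->high_pages, penalty);
		usleep(penalty);
	}
}

int main(void)
{
	/* memory.high of 262144 pages (1 GiB with 4 KiB pages), already exceeded. */
	struct memcg_sim memcg = { .usage_pages = 270000, .high_pages = 262144 };

	/* A task that keeps allocating faster than reclaim can keep up. */
	for (int i = 0; i < 5; i++) {
		memcg.usage_pages += 8192;   /* a 32 MiB allocation burst */
		handle_over_high(&memcg);
	}
	return 0;
}

The point is the same one Peter makes for dirty pages: once reclaim cannot
keep up, the allocating task itself pays the cost, so its allocation rate
is forced down toward what reclaim can sustain.

-- 
Michal Hocko
SUSE Labs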