Date: Wed, 4 Apr 2018 12:00:02 -0400
From: Steven Rostedt <rostedt@goodmis.org>
To: LKML
Cc: Michal Hocko, Zhaoyang Huang, Ingo Molnar, kernel-patch-test@lists.linaro.org, Andrew Morton, Joel Fernandes, linux-mm@kvack.org, Vlastimil Babka
Subject: Re: [PATCH] ring-buffer: Add set/clear_current_oom_origin() during allocations
Message-ID: <20180404120002.6561a5bc@gandalf.local.home>
In-Reply-To: <20180404115310.6c69e7b9@gandalf.local.home>
References: <20180404115310.6c69e7b9@gandalf.local.home>
X-Mailer: Claws Mail 3.16.0 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
On Wed, 4 Apr 2018 11:53:10 -0400
Steven Rostedt wrote:

> @@ -1162,35 +1163,60 @@ static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
>  static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  {
>  	struct buffer_page *bpage, *tmp;
> +	bool user_thread = current->mm != NULL;
> +	gfp_t mflags;
>  	long i;
>  
> -	/* Check if the available memory is there first */
> +	/*
> +	 * Check if the available memory is there first.
> +	 * Note, si_mem_available() only gives us a rough estimate of available
> +	 * memory. It may not be accurate. But we don't care, we just want
> +	 * to prevent doing any allocation when it is obvious that it is
> +	 * not going to succeed.
> +	 */

In case you are wondering how I tested this, I simply added:

#if 0
>  	i = si_mem_available();
>  	if (i < nr_pages)
>  		return -ENOMEM;
#endif

for the tests. Note, without this, I tried to allocate all memory
(bisecting it with allocations that failed and allocations that
succeeded), and couldn't trigger an OOM :-/

Of course, this was on x86_64, where I'm sure I could allocate any
memory, and probably would have had more luck with a 32bit kernel
using highmem.

-- Steve

>  
> +	/*
> +	 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
> +	 * gracefully without invoking oom-killer and the system is not
> +	 * destabilized.
> +	 */
> +	mflags = GFP_KERNEL | __GFP_RETRY_MAYFAIL;
> +
> +	/*
> +	 * If a user thread allocates too much, and si_mem_available()
> +	 * reports there's enough memory, even though there is not.
> +	 * Make sure the OOM killer kills this thread. This can happen
> +	 * even with RETRY_MAYFAIL because another task may be doing
> +	 * an allocation after this task has taken all memory.
> +	 * This is the task the OOM killer needs to take out during this
> +	 * loop, even if it was triggered by an allocation somewhere else.
> +	 */
> +	if (user_thread)
> +		set_current_oom_origin();
>  	for (i = 0; i < nr_pages; i++) {
>  		struct page *page;
> -		/*
> -		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
> -		 * gracefully without invoking oom-killer and the system is not
> -		 * destabilized.
> -		 */
> +
>  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
> -				    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
> -				    cpu_to_node(cpu));
> +				    mflags, cpu_to_node(cpu));
>  		if (!bpage)
>  			goto free_pages;
>  
>  		list_add(&bpage->list, pages);
>  
> -		page = alloc_pages_node(cpu_to_node(cpu),
> -					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
> +		page = alloc_pages_node(cpu_to_node(cpu), mflags, 0);
>  		if (!page)
>  			goto free_pages;
>  		bpage->page = page_address(page);
>  		rb_init_page(bpage->page);
> +
> +		if (user_thread && fatal_signal_pending(current))
> +			goto free_pages;
>  	}
> +	if (user_thread)
> +		clear_current_oom_origin();
>  
>  	return 0;
>  
> @@ -1199,6 +1225,8 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  		list_del_init(&bpage->list);
>  		free_buffer_page(bpage);
>  	}
> +	if (user_thread)
> +		clear_current_oom_origin();
>  
>  	return -ENOMEM;
>  }