Date: Mon, 4 May 2020 17:12:36 +0200
From: Joerg Roedel
To: Steven Rostedt
Wysocki" , Dave Hansen , Tzvetomir Stoyanov Subject: [PATCH] percpu: Sync vmalloc mappings in pcpu_alloc() and free_percpu() Message-ID: <20200504151236.GI8135@suse.de> References: <20200429054857.66e8e333@oasis.local.home> <20200429105941.GQ30814@suse.de> <20200429082854.6e1796b5@oasis.local.home> <20200429100731.201312a9@gandalf.local.home> <20200430141120.GA8135@suse.de> <20200430121136.6d7aeb22@gandalf.local.home> <20200430191434.GC8135@suse.de> <20200430211308.74a994dc@oasis.local.home> <1902703609.78863.1588300015661.JavaMail.zimbra@efficios.com> <20200430223919.50861011@gandalf.local.home> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200430223919.50861011@gandalf.local.home> User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Apr 30, 2020 at 10:39:19PM -0400, Steven Rostedt wrote: > What's so damn special about alloc_percpu()? It's definitely not a fast > path. And it's not used often. Okay, I fixed it in the percpu code. It is definitly not a nice solution, but having to call vmalloc_sync_mappings/unmappings() is not a nice solution at any place in the code. Here is the patch which fixes this issue for me. I am also not sure what to put in the Fixes tag, as it is related to tracing code accessing per-cpu data from the page-fault handler, not sure when this got introduced. Maybe someone else can provide a meaningful Fixes- or stable tag. I also have an idea in mind how to make this all more robust and get rid of the vmalloc_sync_mappings/unmappings() interface, will show more when I know it works the way I think it does. Regards, Joerg From c616a9a09499f9c9d682775767d3de7db81fb2ed Mon Sep 17 00:00:00 2001 From: Joerg Roedel Date: Mon, 4 May 2020 17:11:41 +0200 Subject: [PATCH] percpu: Sync vmalloc mappings in pcpu_alloc() and free_percpu() Sync the vmalloc mappings for all page-tables in the system when allocating and freeing per-cpu memory. This is necessary for architectures which use page-faults on vmalloc areas. The page-fault handlers accesses per-cpu data when tracing is enabled, and fauling again in the page-fault handler on a vmalloc'ed per-cpu area will result in a recursive fault. Signed-off-by: Joerg Roedel --- mm/percpu.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/mm/percpu.c b/mm/percpu.c index d7e3bc649f4e..6ab035bc6977 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1710,6 +1710,20 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, trace_percpu_alloc_percpu(reserved, is_atomic, size, align, chunk->base_addr, off, ptr); + /* + * The per-cpu buffers might be allocated in the vmalloc area of the + * address space. When the architecture allows faulting on the vmalloc + * area and the memory allocated here is accessed in the page-fault + * handler, the vmalloc area fault may be recursive and could never be + * resolved. + * This happens for example in the tracing code which allocates per-cpu + * and accesses them when tracing page-faults. + * To prevent this, make sure the per-cpu buffers allocated here are + * mapped in all PGDs so that the page-fault handler will never fault + * again on them. 
+	 */
+	vmalloc_sync_mappings();
+
 	return ptr;
 
 fail_unlock:
@@ -1958,6 +1972,12 @@ void free_percpu(void __percpu *ptr)
 
 	trace_percpu_free_percpu(chunk->base_addr, off, ptr);
 
+	/*
+	 * See the comment at the vmalloc_sync_mappings() call in
+	 * pcpu_alloc() for why this is necessary.
+	 */
+	vmalloc_sync_unmappings();
+
 	spin_unlock_irqrestore(&pcpu_lock, flags);
 
 	if (need_balance)
-- 
2.12.3
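
To make the failure mode concrete, here is a rough sketch of the
recursion. This is illustrative code only, not the actual x86 fault
path; handle_kernel_fault(), trace_pagefault(), sync_kernel_mapping()
and handle_user_or_bug() are hypothetical stand-ins for the real
handler, the tracing hook, and the lazy sync logic:

	/*
	 * Illustrative sketch, not the real fault handler; all function
	 * names except is_vmalloc_addr() are hypothetical.
	 */
	static int handle_kernel_fault(unsigned long address)
	{
		/*
		 * With page-fault tracing enabled, this hook writes the
		 * event into a per-cpu trace buffer. If that buffer was
		 * vmalloc'ed and its mapping is not yet present in the
		 * current PGD, the write itself faults, and we re-enter
		 * handle_kernel_fault() before the sync below ever runs:
		 * the fault can never be resolved.
		 */
		trace_pagefault(address);

		if (is_vmalloc_addr((void *)address)) {
			/* Lazily copy the missing entry from init_mm. */
			return sync_kernel_mapping(address);
		}

		return handle_user_or_bug(address);
	}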
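
And roughly what the sync side does, simplified from memory of the
x86-32 implementation, with error handling and the PAE/64-bit variants
omitted: propagate the kernel's vmalloc page-table entries from init_mm
into every PGD in the system, so that a later access from any context
cannot fault. sync_vmalloc_pgds() is again a hypothetical name, not the
real vmalloc_sync_mappings():

	/* Sketch: copy vmalloc PGD entries from init_mm into all PGDs. */
	static void sync_vmalloc_pgds(void)
	{
		unsigned long addr;

		for (addr = VMALLOC_START; addr < VMALLOC_END; addr += PGDIR_SIZE) {
			struct page *page;

			spin_lock(&pgd_lock);
			list_for_each_entry(page, &pgd_list, lru) {
				pgd_t *pgd = (pgd_t *)page_address(page) +
					     pgd_index(addr);

				/* Only fill entries that are still empty. */
				if (pgd_none(*pgd))
					set_pgd(pgd, *pgd_offset_k(addr));
			}
			spin_unlock(&pgd_lock);
		}
	}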