From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm@kvack.org,
	Sebastian Andrzej Siewior
Subject: [PATCH 25/34] mm: Use CONFIG_PREEMPTION
Date: Tue, 15 Oct 2019 21:18:12 +0200
Message-Id: <20191015191821.11479-26-bigeasy@linutronix.de>
In-Reply-To: <20191015191821.11479-1-bigeasy@linutronix.de>
References: <20191015191821.11479-1-bigeasy@linutronix.de>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Thomas Gleixner

CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
Both PREEMPT and PREEMPT_RT require the same functionality which today
depends on CONFIG_PREEMPT.

Switch the pte_unmap_same() and SLUB code over to use CONFIG_PREEMPTION.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
---
 mm/memory.c |  2 +-
 mm/slub.c   | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f27..fd2cede4a84f0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2133,7 +2133,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 				pte_t *page_table, pte_t orig_pte)
 {
 	int same = 1;
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
 		spinlock_t *ptl = pte_lockptr(mm, pmd);
 		spin_lock(ptl);
diff --git a/mm/slub.c b/mm/slub.c
index 3d63ae320d31b..23fa669934829 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1984,7 +1984,7 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 	return get_any_partial(s, flags, c);
 }
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 /*
  * Calculate the next globally unique transaction for disambiguiation
  * during cmpxchg. The transactions start with the cpu number and are then
@@ -2029,7 +2029,7 @@ static inline void note_cmpxchg_failure(const char *n,
 
 	pr_info("%s %s: cmpxchg redo ", n, s->name);
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	if (tid_to_cpu(tid) != tid_to_cpu(actual_tid))
 		pr_warn("due to cpu change %d -> %d\n",
 			tid_to_cpu(tid), tid_to_cpu(actual_tid));
@@ -2657,7 +2657,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	unsigned long flags;
 
 	local_irq_save(flags);
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	/*
 	 * We may have been preempted and rescheduled on a different
 	 * cpu before disabling interrupts. Need to reload cpu area
@@ -2700,13 +2700,13 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	 * as we end up on the original cpu again when doing the cmpxchg.
 	 *
 	 * We should guarantee that tid and kmem_cache are retrieved on
-	 * the same cpu. It could be different if CONFIG_PREEMPT so we need
+	 * the same cpu. It could be different if CONFIG_PREEMPTION so we need
 	 * to check if it is matched or not.
 	 */
 	do {
 		tid = this_cpu_read(s->cpu_slab->tid);
 		c = raw_cpu_ptr(s->cpu_slab);
-	} while (IS_ENABLED(CONFIG_PREEMPT) &&
+	} while (IS_ENABLED(CONFIG_PREEMPTION) &&
 		 unlikely(tid != READ_ONCE(c->tid)));
 
 	/*
@@ -2984,7 +2984,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	do {
 		tid = this_cpu_read(s->cpu_slab->tid);
 		c = raw_cpu_ptr(s->cpu_slab);
-	} while (IS_ENABLED(CONFIG_PREEMPT) &&
+	} while (IS_ENABLED(CONFIG_PREEMPTION) &&
 		 unlikely(tid != READ_ONCE(c->tid)));
 
 	/* Same with comment on barrier() in slab_alloc_node() */
-- 
2.23.0