Date: Wed, 7 Jul 2021 13:00:39 +0000
From: Dennis Zhou
To: Linus Torvalds
Cc: Tejun Heo, Christoph Lameter, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [GIT PULL] percpu fixes for v5.14-rc1

Hi Linus,

This is just a single change to fix percpu depopulation. The depopulation
path reused code written specifically for the free path, which relied on
vmalloc to flush the tlb lazily. Because we now modify the backing pages
during the lifetime of a chunk, we need to flush the tlb when depopulating
as well. Guenter Roeck reported this issue in [1] on mips. I believe we
have simply been lucky on x86: the much larger chunk sizes there mean this
memory churns far less often.
[1] https://lore.kernel.org/lkml/20210702191140.GA3166599@roeck-us.net/

Thanks,
Dennis

The following changes since commit d6b63b5b7d7f363c6a54421533791e9849adf2e0:

  Merge tag 'sound-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound (2021-07-02 15:25:23 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-5.14-fixes

for you to fetch changes up to 93274f1dd6b0a615b299beddf99871fe81f91275:

  percpu: flush tlb in pcpu_reclaim_populated() (2021-07-04 18:30:17 +0000)

----------------------------------------------------------------
Dennis Zhou (1):
      percpu: flush tlb in pcpu_reclaim_populated()

 mm/percpu-km.c |  6 ++++++
 mm/percpu-vm.c |  5 +++--
 mm/percpu.c    | 32 ++++++++++++++++++++++++++------
 3 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/mm/percpu-km.c b/mm/percpu-km.c
index c9d529dc7651..fe31aa19db81 100644
--- a/mm/percpu-km.c
+++ b/mm/percpu-km.c
@@ -32,6 +32,12 @@
 #include
 
+static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk,
+				      int page_start, int page_end)
+{
+	/* nothing */
+}
+
 static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
 			       int page_start, int page_end, gfp_t gfp)
 {
diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
index ee5d89fcd66f..2054c9213c43 100644
--- a/mm/percpu-vm.c
+++ b/mm/percpu-vm.c
@@ -303,6 +303,9 @@ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
  * For each cpu, depopulate and unmap pages [@page_start,@page_end)
  * from @chunk.
  *
+ * Caller is required to call pcpu_post_unmap_tlb_flush() if not returning the
+ * region back to vmalloc() which will lazily flush the tlb.
+ *
  * CONTEXT:
  * pcpu_alloc_mutex.
 */
@@ -324,8 +327,6 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
 
 	pcpu_unmap_pages(chunk, pages, page_start, page_end);
 
-	/* no need to flush tlb, vmalloc will handle it lazily */
-
 	pcpu_free_pages(chunk, pages, page_start, page_end);
 }
 
diff --git a/mm/percpu.c b/mm/percpu.c
index b4cebeca4c0c..7f2e0151c4e2 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1572,6 +1572,7 @@ static void pcpu_chunk_depopulated(struct pcpu_chunk *chunk,
  *
  * pcpu_populate_chunk		- populate the specified range of a chunk
  * pcpu_depopulate_chunk	- depopulate the specified range of a chunk
+ * pcpu_post_unmap_tlb_flush	- flush tlb for the specified range of a chunk
  * pcpu_create_chunk		- create a new chunk
  * pcpu_destroy_chunk		- destroy a chunk, always preceded by full depop
  * pcpu_addr_to_page		- translate address to physical address
@@ -1581,6 +1582,8 @@ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
 			       int page_start, int page_end, gfp_t gfp);
 static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
 				  int page_start, int page_end);
+static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk,
+				      int page_start, int page_end);
 static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp);
 static void pcpu_destroy_chunk(struct pcpu_chunk *chunk);
 static struct page *pcpu_addr_to_page(void *addr);
@@ -2137,11 +2140,12 @@ static void pcpu_reclaim_populated(void)
 {
 	struct pcpu_chunk *chunk;
 	struct pcpu_block_md *block;
+	int freed_page_start, freed_page_end;
 	int i, end;
+	bool reintegrate;
 
 	lockdep_assert_held(&pcpu_lock);
 
-restart:
 	/*
 	 * Once a chunk is isolated to the to_depopulate list, the chunk is no
 	 * longer discoverable to allocations whom may populate pages.  The only
@@ -2157,6 +2161,9 @@ static void pcpu_reclaim_populated(void)
 		 * Scan chunk's pages in the reverse order to keep populated
 		 * pages close to the beginning of the chunk.
 		 */
+		freed_page_start = chunk->nr_pages;
+		freed_page_end = 0;
+		reintegrate = false;
 		for (i = chunk->nr_pages - 1, end = -1; i >= 0; i--) {
 			/* no more work to do */
 			if (chunk->nr_empty_pop_pages == 0)
 				break;
 
 			/* reintegrate chunk to prevent atomic alloc failures */
 			if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_HIGH) {
-				pcpu_reintegrate_chunk(chunk);
-				goto restart;
+				reintegrate = true;
+				goto end_chunk;
 			}
 
 			/*
@@ -2194,16 +2201,29 @@ static void pcpu_reclaim_populated(void)
 			spin_lock_irq(&pcpu_lock);
 			pcpu_chunk_depopulated(chunk, i + 1, end + 1);
+			freed_page_start = min(freed_page_start, i + 1);
+			freed_page_end = max(freed_page_end, end + 1);
 
 			/* reset the range and continue */
 			end = -1;
 		}
 
-		if (chunk->free_bytes == pcpu_unit_size)
+end_chunk:
+		/* batch tlb flush per chunk to amortize cost */
+		if (freed_page_start < freed_page_end) {
+			spin_unlock_irq(&pcpu_lock);
+			pcpu_post_unmap_tlb_flush(chunk,
+						  freed_page_start,
+						  freed_page_end);
+			cond_resched();
+			spin_lock_irq(&pcpu_lock);
+		}
+
+		if (reintegrate || chunk->free_bytes == pcpu_unit_size)
 			pcpu_reintegrate_chunk(chunk);
 		else
-			list_move(&chunk->list,
-				  &pcpu_chunk_lists[pcpu_sidelined_slot]);
+			list_move_tail(&chunk->list,
+				       &pcpu_chunk_lists[pcpu_sidelined_slot]);
 	}
 }