Date: Wed, 10 Oct 2018 09:58:44 +0200
From: Michal Hocko
To: Mike Rapoport
Cc: linux-mm@kvack.org, Andrew Morton, Catalin Marinas, Chris Zankel,
	Geert Uytterhoeven, Guan Xuetao, Ingo Molnar, Matt Turner,
	Michael Ellerman, Michal Simek, Paul Burton, Richard Weinberger,
	Russell King, Thomas Gleixner, Tony Luck,
	linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-mips@linux-mips.org,
	linuxppc-dev@lists.ozlabs.org, linux-um@lists.infradead.org
Subject: Re: [PATCH] memblock: stop using implicit alignment to SMP_CACHE_BYTES
Message-ID: <20181010075844.GA5873@dhcp22.suse.cz>
References: <1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com>
In-Reply-To: <1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com>

On Fri 05-10-18 00:07:04, Mike Rapoport wrote:
> When memblock allocation APIs are called with align = 0, the alignment
> is implicitly set to SMP_CACHE_BYTES.

I would add something like
"
Implicit alignment is done deep in the memblock allocator and it can
come as a surprise. Not that such an alignment would be wrong even when
used incorrectly but it is better to be explicit for the sake of
clarity and the principle of least surprise.
"

> Replace all such uses of memblock APIs with the 'align' parameter
> explicitly set to SMP_CACHE_BYTES and stop implicit alignment
> assignment in the memblock internal allocation functions.
>
> For the case when memblock APIs are used via helper functions, e.g.
> iommu_arena_new_node() on Alpha, the helper functions were detected
> with Coccinelle's help and then manually examined and updated where
> appropriate.
>
> The direct memblock API users were updated using the semantic patch
> below:
>
> @@
> expression size, min_addr, max_addr, nid;
> @@
> (
> |
> - memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr,
> nid)
> |
> - memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr,
> nid)
> |
> - memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
> + memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
> |
> - memblock_alloc(size, 0)
> + memblock_alloc(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_raw(size, 0)
> + memblock_alloc_raw(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_from(size, 0, min_addr)
> + memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
> |
> - memblock_alloc_nopanic(size, 0)
> + memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_low(size, 0)
> + memblock_alloc_low(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_low_nopanic(size, 0)
> + memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
> |
> - memblock_alloc_from_nopanic(size, 0, min_addr)
> + memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
> |
> - memblock_alloc_node(size, 0, nid)
> + memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
> )
>
> Suggested-by: Michal Hocko
> Signed-off-by: Mike Rapoport

I do agree that this is an improvement. I would also add a WARN_ON_ONCE
on 0 alignment to catch any leftovers. If we ever grow a user which
would explicitly require the zero alignment (I would be surprised) then
we can remove the warning.

Acked-by: Michal Hocko

-- 
Michal Hocko
SUSE Labs
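
As a usage note, a semantic patch like the one quoted above is normally
applied tree-wide with Coccinelle's spatch tool; the .cocci file name
below is hypothetical, chosen only for the example:

  spatch --sp-file memblock_explicit_align.cocci --dir . --in-place

The --dir and --in-place options make spatch rewrite every matching
call site under the given directory, after which the results can be
reviewed with git diff before committing.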
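The WARN_ON_ONCE suggestion could look roughly like the sketch below.
This is a standalone userspace model, not the kernel code: the value of
SMP_CACHE_BYTES, the allocator function name, and the WARN_ON_ONCE
stand-in are all simplified assumptions loosely modelled on
mm/memblock.c.

#include <stdio.h>

#define SMP_CACHE_BYTES 64	/* placeholder; the real value is per-arch */

/* Userspace stand-in for the kernel's WARN_ON_ONCE(): report the first
 * time the condition is true at this call site, and yield the condition. */
#define WARN_ON_ONCE(cond) ({					\
	static int warned;					\
	int c = !!(cond);					\
	if (c && !warned) {					\
		warned = 1;					\
		fprintf(stderr, "WARNING: %s\n", #cond);	\
	}							\
	c;							\
})

/* Model of an internal allocation path after the series: no silent
 * fallback, but leftover zero-align callers are flagged. */
static unsigned long memblock_alloc_model(unsigned long size,
					  unsigned long align)
{
	/* Catch callers that still pass align == 0 after the conversion. */
	if (WARN_ON_ONCE(!align))
		align = SMP_CACHE_BYTES;

	/* The real allocator would search the memblock regions here;
	 * return a toy address rounded up to the requested alignment. */
	return (0x1000 + align - 1) & ~(align - 1);
}

int main(void)
{
	memblock_alloc_model(4096, 0);			/* warns once */
	memblock_alloc_model(4096, 0);			/* already warned: silent */
	memblock_alloc_model(4096, SMP_CACHE_BYTES);	/* explicit align, no warning */
	return 0;
}

The property this models is the one argued for in the thread: a stray
zero-align caller is reported once and still gets a sane alignment, so
boot keeps working while the leftover call site gets found and fixed.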