    memblock: stop using implicit alignment to SMP_CACHE_BYTES (commit 7e1c4e27)
    Mike Rapoport authored
    When memblock allocation APIs are called with align = 0, the alignment
    is implicitly set to SMP_CACHE_BYTES.
    
    Implicit alignment is done deep in the memblock allocator and it can
    come as a surprise.  Not that such an alignment would be wrong even
    when used incorrectly, but it is better to be explicit for the sake of
    clarity and the principle of least surprise.
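
    To illustrate, the fallback buried in the allocator is roughly of the
    following shape (a simplified sketch; the exact function name and code
    in mm/memblock.c may differ):

        /* deep inside mm/memblock.c -- illustrative only */
        static void * __init memblock_alloc_internal(phys_addr_t size, phys_addr_t align,
                                                     phys_addr_t min_addr, phys_addr_t max_addr,
                                                     int nid)
        {
                if (!align)
                        align = SMP_CACHE_BYTES;        /* the implicit fallback */

                /* ... proceed with the actual range search and reservation ... */
        }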
    
    Replace all such uses of memblock APIs with the 'align' parameter
    explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment
    in the memblock internal allocation functions.
    
    For the cases when memblock APIs are used via helper functions, e.g.
    iommu_arena_new_node() on Alpha, the helper functions were detected with
    Coccinelle's help and then manually examined and updated where
    appropriate.
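
    As an illustration of the helper-function case, consider a hypothetical
    wrapper that used to pass an alignment of 0 down to memblock; such call
    sites are not caught by the semantic patch below and were updated by
    hand to pass SMP_CACHE_BYTES explicitly:

        /* hypothetical helper, shown after the update */
        static void * __init early_table_alloc(phys_addr_t size, int nid)
        {
                /* was: memblock_alloc_try_nid(size, 0, ...) */
                return memblock_alloc_try_nid(size, SMP_CACHE_BYTES, 0,
                                              MEMBLOCK_ALLOC_ACCESSIBLE, nid);
        }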
    
    The direct users of the memblock APIs were updated using the semantic patch below:
    
    @@
    expression size, min_addr, max_addr, nid;
    @@
    (
    - memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
    + memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
    |
    - memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
    + memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
    |
    - memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
    + memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
    |
    - memblock_alloc(size, 0)
    + memblock_alloc(size, SMP_CACHE_BYTES)
    |
    - memblock_alloc_raw(size, 0)
    + memblock_alloc_raw(size, SMP_CACHE_BYTES)
    |
    - memblock_alloc_from(size, 0, min_addr)
    + memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
    |
    - memblock_alloc_nopanic(size, 0)
    + memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
    |
    - memblock_alloc_low(size, 0)
    + memblock_alloc_low(size, SMP_CACHE_BYTES)
    |
    - memblock_alloc_low_nopanic(size, 0)
    + memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
    |
    - memblock_alloc_from_nopanic(size, 0, min_addr)
    + memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
    |
    - memblock_alloc_node(size, 0, nid)
    + memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
    )
    
    [mhocko@suse.com: changelog update]
    [akpm@linux-foundation.org: coding-style fixes]
    [rppt@linux.ibm.com: fix missed uses of implicit alignment]
      Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx
    Link: http://lkml.kernel.org/r/1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com
    
    
    Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
    Suggested-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Paul Burton <paul.burton@mips.com>	[MIPS]
    Acked-by: Michael Ellerman <mpe@ellerman.id.au>	[powerpc]
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Guan Xuetao <gxt@pku.edu.cn>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>