    mm/slub.c: improve performance by skipping checked node in get_any_partial() · 9b0317d0
    Wei Yang authored
    1. Background
    
      Current slub has three layers:
    
        * cpu_slab
        * percpu_partial
        * per node partial list
    
      The slub allocator tries to get an object from top to bottom.  When
      it can't get one from the upper two layers, it searches the per-node
      partial list.  This is done in get_partial().
    
      The abstraction of get_partial() looks like this:
    
          get_partial()
              get_partial_node()
              get_any_partial()
                  for_each_zone_zonelist()
    
      The idea behind this is: first try a local node, then try other nodes
      if caller doesn't specify a node.
    
    2. Room for Improvement
    
      Looking one step deeper, get_any_partial() tries to find a proper
      node with for_each_zone_zonelist(), which iterates over the
      node_zonelists.
    
      This behavior introduces redundant checks on the same node,
      because:
    
        * the local node is already checked in get_partial_node()
        * one node may have several zones on node_zonelists
    
    3. Solution Proposed in Patch
    
      We can reduce these redundant checks by recording the last
      unsuccessful node and then skipping it.
    
    4. Tests & Results
    
      After some tests, the results show a small improvement, especially
      on a machine with only one node.
    
    4.1 Test Description
    
      There are two cases for two system configurations.
    
      Test Cases:
    
        1. counter comparison
        2. kernel build test
    
      System Configuration:
    
        1. One node machine with 4G
        2. Four node machine with 8G
    
    4.2 Result for Test 1
    
      Test 1: counter comparison
    
      This test uses a hacked kernel to record how many times
      get_any_partial() is invoked and how many times its inner loop
      iterates.  The ratio of the two counters tells us how many inner
      loop iterations were skipped.
    
      Here is a snip of the test patch.
    
      ---
      static void *get_any_partial()
      {
      	get_partial_count++;

      	do {
      		for_each_zone_zonelist() {
      			get_partial_try_count++;
      		}
      	} while ();

      	return NULL;
      }
      ---
    
      The result of (get_partial_count / get_partial_try_count):
    
       +----------+----------------+------------+-------------+
       |          |       Base     |    Patched |  Improvement|
       +----------+----------------+------------+-------------+
       |One Node  |       1:3      |    1:0     |      - 100% |
       +----------+----------------+------------+-------------+
       |Four Nodes|       1:5.8    |    1:2.5   |      -  56% |
       +----------+----------------+------------+-------------+
    
    4.3 Result for Test 2
    
      Test 2: kernel build
    
       Command used:
    
       > time make -j8 bzImage
    
       Each version/system-configuration combination was built four
       times; the average of the 'real' time is compared.
    
       +----------+----------------+------------+-------------+
       |          |       Base     |   Patched  |  Improvement|
       +----------+----------------+------------+-------------+
       |One Node  |      4m41s     |   4m32s    |     - 4.47% |
       +----------+----------------+------------+-------------+
       |Four Nodes|      4m45s     |   4m39s    |     - 2.92% |
       +----------+----------------+------------+-------------+
    
    [akpm@linux-foundation.org: rename variable, tweak comment]
    Link: http://lkml.kernel.org/r/20181120033119.30013-1-richard.weiyang@gmail.com
    
    
    Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>