    mm: fix race between swapoff and mincore · e9adaf76
    Huang Ying authored
    Via commit 4b3ef9da ("mm/swap: split swap cache into 64MB trunks"),
    after swapoff, the address_space associated with the swap device will be
    freed.  So users of swap_address_space() that touch the address_space
    need some kind of mechanism to prevent the address_space from being
    freed while it is being accessed.
    
    When mincore processes an unmapped range for swapped-out shmem pages, it
    doesn't hold a lock to prevent the swap device from being swapped off.
    So the following race is possible:
    
    CPU1                                    CPU2
    do_mincore()                            swapoff()
      walk_page_range()
        mincore_unmapped_range()
          __mincore_unmapped_range()
            mincore_page()
              as = swap_address_space()
              ...                             exit_swap_address_space()
              ...                               kvfree(spaces)
              find_get_page(as)
    
    The address space may be accessed after being freed.
    
    To fix the race, get_swap_device()/put_swap_device() is used to enclose
    find_get_page(): it checks whether the swap entry is still valid and
    prevents the swap device from being swapped off during the access.
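
    The fix above follows a common get/put reference pattern. As a minimal
    userspace C sketch of that pattern (illustrative only, not the kernel
    code: the struct, function names, and spin-wait teardown here are all
    hypothetical stand-ins for get_swap_device()/put_swap_device() and
    exit_swap_address_space()), readers take a reference before touching the
    per-device data, and teardown marks the device dead and waits for
    readers to drain before freeing:

    ```c
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the per-swap-device state; d->data plays
     * the role of the address_space that swapoff eventually frees. */
    struct device {
        atomic_int refs;   /* active readers holding a reference */
        atomic_int dead;   /* set by teardown before freeing */
        int *data;
    };

    /* Like get_swap_device(): returns the device if it is still live,
     * or NULL if teardown has already started. */
    struct device *get_device(struct device *d)
    {
        atomic_fetch_add(&d->refs, 1);
        if (atomic_load(&d->dead)) {
            atomic_fetch_sub(&d->refs, 1);
            return NULL;   /* swapoff already ran: caller must bail out */
        }
        return d;
    }

    /* Like put_swap_device(): drop the reference taken above. */
    void put_device(struct device *d)
    {
        atomic_fetch_sub(&d->refs, 1);
    }

    /* Teardown: mark the device dead, wait for in-flight readers to
     * drain, then free -- only after this is kvfree()-style freeing safe. */
    void teardown(struct device *d)
    {
        atomic_store(&d->dead, 1);
        while (atomic_load(&d->refs) > 0)
            ;              /* spin until the last reader drops its ref */
        free(d->data);
        d->data = NULL;
    }
    ```

    A reader that only dereferences d->data between a successful
    get_device() and the matching put_device() can never see freed memory,
    which is exactly the guarantee the patch adds around find_get_page().
    
    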
    
    Link: http://lkml.kernel.org/r/20180313012036.1597-1-ying.huang@intel.com
    Fixes: 4b3ef9da ("mm/swap: split swap cache into 64MB trunks")
    Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Hugh Dickins <hughd@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>