    thp: fix another corner case of munlock() vs. THPs · 6ebb4a1b
    Kirill A. Shutemov authored
    The following test case triggers BUG() in munlock_vma_pages_range():
    
    	#include <fcntl.h>
    	#include <stdlib.h>
    	#include <sys/mman.h>
    	#include <unistd.h>
    
    	int main(int argc, char *argv[])
    	{
    		int fd;
    
    		system("mount -t tmpfs -o huge=always none /mnt");
    		fd = open("/mnt/test", O_CREAT | O_RDWR);
    		ftruncate(fd, 4UL << 20);
    		mmap(NULL, 4UL << 20, PROT_READ | PROT_WRITE,
    				MAP_SHARED | MAP_FIXED | MAP_LOCKED, fd, 0);
    		mmap(NULL, 4096, PROT_READ | PROT_WRITE,
    				MAP_SHARED | MAP_LOCKED, fd, 0);
    		munlockall();
    		return 0;
    	}
    
    The second mmap() creates a PTE mapping of the first huge page in the
    file.  This makes the kernel munlock the page, as we never keep a
    PTE-mapped page mlocked.
    
    On munlockall(), when we handle the vma created by the first mmap(),
    munlock_vma_page() returns page_mask == 0, as the page is not mlocked
    anymore.  On the next iteration, follow_page_mask() returns a tail
    page, but page_mask is HPAGE_NR_PAGES - 1.  That makes us skip to the
    first tail page of the next huge page and step on
    VM_BUG_ON_PAGE(PageMlocked(page)).
    
    The fix is to not use the page_mask from follow_page_mask() at all: it
    has no use for us.
    
    Link: http://lkml.kernel.org/r/20170302150252.34120-1-kirill.shutemov@linux.intel.com
    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: <stable@vger.kernel.org>    [4.5+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>