Commit 192d7232 authored by Minchan Kim, committed by Linus Torvalds

mm: make try_to_munlock() return void

try_to_munlock() returns SWAP_MLOCK if one of the VMAs mapping the page has
the VM_LOCKED flag set.  In that case, the VM also sets PG_mlocked on the
page, provided the page is not a pte-mapped THP, which cannot be mlocked.

With that, __munlock_isolated_page() can use PageMlocked() to check whether
try_to_munlock() succeeded, without relying on its return value.  This helps
simplify try_to_unmap()/try_to_unmap_one() in upcoming patches.

[remove PG_Mlocked VM_BUG_ON check]
Signed-off-by: Minchan Kim <>
Acked-by: Kirill A. Shutemov <>
Acked-by: Vlastimil Babka <>
Cc: Anshuman Khandual <>
Cc: Hillf Danton <>
Cc: Johannes Weiner <>
Cc: Michal Hocko <>
Cc: Naoya Horiguchi <>
Cc: Sasha Levin <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 22ffb33f
@@ -235,7 +235,7 @@ int page_mkclean(struct page *);
  * called in munlock()/munmap() path to check for other vmas holding
  * the page mlocked.
  */
-int try_to_munlock(struct page *);
+void try_to_munlock(struct page *);
 
 void remove_migration_ptes(struct page *old, struct page *new, bool locked);
@@ -123,17 +123,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
 static void __munlock_isolated_page(struct page *page)
 {
-	int ret = SWAP_AGAIN;
 	/*
 	 * Optimization: if the page was mapped just once, that's our mapping
 	 * and we don't need to check all the other vmas.
 	 */
 	if (page_mapcount(page) > 1)
-		ret = try_to_munlock(page);
+		try_to_munlock(page);
 
 	/* Did try_to_unlock() succeed or punt? */
-	if (ret != SWAP_MLOCK)
+	if (!PageMlocked(page))
@@ -1552,18 +1552,10 @@ static int page_not_mapped(struct page *page)
  * Called from munlock code. Checks all of the VMAs mapping the page
  * to make sure nobody else has this page mlocked. The page will be
  * returned with PG_mlocked cleared if no other vmas have it mlocked.
- *
- * Return values are:
- * SWAP_AGAIN	- no vma is holding page mlocked, or,
- * SWAP_AGAIN	- page mapped in mlocked vma -- couldn't acquire mmap sem
- * SWAP_FAIL	- page cannot be located at present
- * SWAP_MLOCK	- page is now mlocked.
  */
-int try_to_munlock(struct page *page)
+void try_to_munlock(struct page *page)
 {
-	int ret;
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)TTU_MUNLOCK,
@@ -1573,9 +1565,9 @@ int try_to_munlock(struct page *page)
 	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
 
-	ret = rmap_walk(page, &rwc);
-	return ret;
+	rmap_walk(page, &rwc);
 }
 
 void __put_anon_vma(struct anon_vma *anon_vma)