Commit 57e68e9c authored by Vlastimil Babka, committed by Linus Torvalds

mm: try_to_unmap_cluster() should lock_page() before mlocking

A BUG_ON(!PageLocked) was triggered in mlock_vma_page() by Sasha Levin
fuzzing with trinity.  The call site try_to_unmap_cluster() does not lock
the pages other than its check_page parameter (which is already locked).

The BUG_ON in mlock_vma_page() is not documented and its purpose is
somewhat unclear, but apparently it serializes against page migration,
which could otherwise fail to transfer the PG_mlocked flag.  This would
not be fatal, as the page would be eventually encountered again, but
NR_MLOCK accounting would become distorted nevertheless.  This patch adds
a comment to the BUG_ON in mlock_vma_page() and munlock_vma_page() to
that effect.

The call site try_to_unmap_cluster() is fixed so that for page !=
check_page, trylock_page() is attempted (to avoid possible deadlocks as we
already have check_page locked) and mlock_vma_page() is performed only
upon success.  If the page lock cannot be obtained, the page is left
without PG_mlocked, which is again not a problem in the whole unevictable
memory design.
Signed-off-by: Vlastimil Babka <>
Signed-off-by: Bob Liu <>
Reported-by: Sasha Levin <>
Cc: Wanpeng Li <>
Cc: Michel Lespinasse <>
Cc: KOSAKI Motohiro <>
Acked-by: Rik van Riel <>
Cc: David Rientjes <>
Cc: Mel Gorman <>
Cc: Hugh Dickins <>
Cc: Joonsoo Kim <>
Cc: <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 3a025760
@@ -79,6 +79,7 @@ void clear_page_mlock(struct page *page)
  */
 void mlock_vma_page(struct page *page)
 {
+	/* Serialize with page migration */
 	BUG_ON(!PageLocked(page));
 
 	if (!TestSetPageMlocked(page)) {
@@ -174,6 +175,7 @@ unsigned int munlock_vma_page(struct page *page)
 	unsigned int nr_pages;
 	struct zone *zone = page_zone(page);
 
+	/* For try_to_munlock() and to serialize with page migration */
 	BUG_ON(!PageLocked(page));
 
@@ -1332,9 +1332,19 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
 		BUG_ON(!page || PageAnon(page));
 
 		if (locked_vma) {
-			mlock_vma_page(page);	/* no-op if already mlocked */
-			if (page == check_page)
+			if (page == check_page) {
+				/* we know we have check_page locked */
+				mlock_vma_page(page);
 				ret = SWAP_MLOCK;
-			continue;	/* don't unmap */
+			} else if (trylock_page(page)) {
+				/*
+				 * If we can lock the page, perform mlock.
+				 * Otherwise leave the page alone, it will be
+				 * eventually encountered again later.
+				 */
+				mlock_vma_page(page);
+				unlock_page(page);
+			}
+			continue;	/* don't unmap */
 		}
 