Commit e3ae1953 authored by Kirill A. Shutemov, committed by Linus Torvalds

thp: limit number of object to scan on deferred_split_scan()

If we have a lot of pages in the queue to be split, deferred_split_scan()
can spend an unreasonable amount of time under the spinlock with
interrupts disabled.

Let's cap the number of pages to split per scan by sc->nr_to_scan.
Signed-off-by: Kirill A. Shutemov <>
Reported-by: Andrea Arcangeli <>
Reviewed-by: Andrea Arcangeli <>
Cc: Hugh Dickins <>
Cc: Dave Hansen <>
Cc: Mel Gorman <>
Cc: Rik van Riel <>
Cc: Vlastimil Babka <>
Cc: "Aneesh Kumar K.V" <>
Cc: Johannes Weiner <>
Cc: Michal Hocko <>
Cc: Jerome Marchand <>
Cc: Sasha Levin <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent cb8d68ec
@@ -3478,17 +3478,19 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 	int split = 0;
 
 	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
-	list_splice_init(&pgdata->split_queue, &list);
 	/* Take pin on all head pages to avoid freeing them under us */
-	list_for_each_safe(pos, next, &list) {
+	list_for_each_safe(pos, next, &pgdata->split_queue) {
 		page = list_entry((void *)pos, struct page, mapping);
 		page = compound_head(page);
-		/* race with put_compound_page() */
-		if (!get_page_unless_zero(page)) {
+		if (get_page_unless_zero(page)) {
+			list_move(page_deferred_list(page), &list);
+		} else {
+			/* We lost race with put_compound_page() */
 			list_del_init(page_deferred_list(page));
 			pgdata->split_queue_len--;
 		}
+		if (!--sc->nr_to_scan)
+			break;
 	}
 	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);