
Commit c576160

minchan authored and gregkh committed
mm: prevent double decrease of nr_reserved_highatomic
commit 4855e4a7f29d6d10b0b9c84e189c770c9a94e91e upstream.

There is a race between page freeing and unreserve_highatomic_pageblock:

CPU 0                                  CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
                                       unreserve_highatomic_pageblock
                                         spin_lock_irqsave(&zone->lock)
                                         move_freepages_block
                                           set_pageblock_migratetype(page)
                                         spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

Because of the above race, a page on CPU 0 can go onto a non-highatomic
free list even though the pageblock's type has already been changed. As a
result, the unreserve logic for highatomic pageblocks can decrease the
reserved count for the same pageblock several times, creating a mismatch
between nr_reserved_highatomic and the actual number of reserved
pageblocks.

So, this patch verifies whether the pageblock is highatomic and decreases
the count only if it is.

Link: http://lkml.kernel.org/r/1476259429-18279-3-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 6ea627b commit c576160

1 file changed

Lines changed: 18 additions & 6 deletions


mm/page_alloc.c

@@ -1748,13 +1748,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 					struct page, lru);
 
 		/*
-		 * It should never happen but changes to locking could
-		 * inadvertently allow a per-cpu drain to add pages
-		 * to MIGRATE_HIGHATOMIC while unreserving so be safe
-		 * and watch for underflows.
+		 * In page freeing path, migratetype change is racy so
+		 * we can counter several free pages in a pageblock
+		 * in this loop althoug we changed the pageblock type
+		 * from highatomic to ac->migratetype. So we should
+		 * adjust the count once.
 		 */
-		zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-				zone->nr_reserved_highatomic);
+		if (get_pageblock_migratetype(page) ==
+						MIGRATE_HIGHATOMIC) {
+			/*
+			 * It should never happen but changes to
+			 * locking could inadvertently allow a per-cpu
+			 * drain to add pages to MIGRATE_HIGHATOMIC
+			 * while unreserving so be safe and watch for
+			 * underflows.
+			 */
+			zone->nr_reserved_highatomic -= min(
+					pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+		}
 
 		/*
 		 * Convert to ac->migratetype and avoid the normal
