path: root/mm/migrate.c
author: Hyeonggon Yoo <42.hyeyoo@gmail.com> 2024-12-10 21:48:07 +0900
committer: Andrew Morton <akpm@linux-foundation.org> 2025-01-13 22:40:58 -0800
commit: 8c6e2d122b71b8cca7b215bca1cd586a1c09999a (patch)
tree: 98ed26156e992bc4c2c5f45b9f317d7663f5b02e /mm/migrate.c
parent: samples/damon/prcl: implement schemes setup (diff)
mm/migrate: remove slab checks in isolate_movable_page()
Commit 8b8817630ae8 ("mm/migrate: make isolate_movable_page() skip slab
pages") introduced slab checks to prevent mis-identification of slab pages
as movable kernel pages.  However, after Matthew's frozen folio series,
these slab checks became unnecessary as the migration logic fails to
increase the reference count for frozen slab folios.  Remove these
redundant slab checks and associated memory barriers.

Link: https://lkml.kernel.org/r/20241210124807.8584-1-42.hyeyoo@gmail.com
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--  mm/migrate.c  8
1 file changed, 0 insertions(+), 8 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index e9e00d1d1d19..32cc8e0b1cce 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -68,10 +68,6 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
if (!folio)
goto out;
- if (unlikely(folio_test_slab(folio)))
- goto out_putfolio;
- /* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
- smp_rmb();
/*
* Check movable flag before taking the page lock because
* we use non-atomic bitops on newly allocated page flags so
@@ -79,10 +75,6 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
*/
if (unlikely(!__folio_test_movable(folio)))
goto out_putfolio;
- /* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
- smp_rmb();
- if (unlikely(folio_test_slab(folio)))
- goto out_putfolio;
/*
* As movable pages are not isolated from LRU lists, concurrent