author: David Hildenbrand <david@redhat.com> 2025-09-01 17:03:45 +0200
committer: Andrew Morton <akpm@linux-foundation.org> 2025-09-21 14:22:06 -0700
commit: 80e7bb74d4ff24725f0ddb1c72d8de45a3d975f6 (patch)
tree: 33bc0d3e966f6a54c909be8221d2f95ac87db75d /mm/util.c
parent: a16c46c2402026162111ed9fd1fc28d25223443e (diff)
download: linux-80e7bb74d4ff24725f0ddb1c72d8de45a3d975f6.tar.gz
scatterlist: disallow non-contiguous page ranges in a single SG entry
The expectation is that there is currently no user that would pass in non-contiguous page ranges: no allocator, not even VMA, will hand these out. The only problematic case would be someone providing a range obtained directly from memblock, or manually merging problematic ranges. If we find such cases, we should fix them to create separate SG entries.

Let's check in sg_set_page() that this is really the case. There is no need to check in sg_set_folio(), as pages in a folio are guaranteed to be contiguous. As sg_set_page() gets inlined into modules, we have to export the page_range_contiguous() helper -- use EXPORT_SYMBOL, as there is nothing special about this helper that would warrant restricting it to GPL-only modules.

We can now drop the nth_page() usage in sg_page_iter_page().

Link: https://lkml.kernel.org/r/20250901150359.867252-25-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
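For illustration, a minimal sketch of the kind of check sg_set_page() gains. The assertion macro (VM_WARN_ON_ONCE) and the PFN_UP() rounding are assumptions of this sketch; the commit message only states that sg_set_page() now verifies the page range:

	static inline void sg_set_page(struct scatterlist *sg, struct page *page,
				       unsigned int len, unsigned int offset)
	{
		/*
		 * Every page spanned by [offset, offset + len) must be
		 * physically contiguous; a caller holding a disjoint range
		 * has to create separate SG entries instead.
		 */
		VM_WARN_ON_ONCE(!page_range_contiguous(page, PFN_UP(offset + len)));

		sg_assign_page(sg, page);
		sg->offset = offset;
		sg->length = len;
	}

Because this inline helper ends up compiled into modules, the out-of-line page_range_contiguous() it calls must itself be exported, which is exactly what the diff below adds.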
Diffstat (limited to 'mm/util.c')
-rw-r--r--  mm/util.c  1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/mm/util.c b/mm/util.c
index 4b9d40c7128681..e29d3310e26bb6 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1315,4 +1315,5 @@ bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
return false;
return true;
}
+EXPORT_SYMBOL(page_range_contiguous);
#endif
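The hunk above shows only the tail of page_range_contiguous() together with the new export. For context, a hedged reconstruction of the whole helper, assuming the loop compares PFNs via nth_page(); the surrounding config guard is likewise an assumption of this sketch:

	/*
	 * Guard assumed: the out-of-line version is only needed when the
	 * memmap may be virtually discontiguous; otherwise a trivial
	 * inline comparison of "struct page" pointers suffices.
	 */
	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
	{
		const unsigned long start_pfn = page_to_pfn(page);
		unsigned long i;

		/*
		 * Without a virtually contiguous memmap, "struct page"
		 * pointer arithmetic can cross memory-section boundaries,
		 * so compare PFNs and step through the range via nth_page().
		 */
		for (i = 1; i < nr_pages; i++)
			if (page_to_pfn(nth_page(page, i)) != start_pfn + i)
				return false;
		return true;
	}
	EXPORT_SYMBOL(page_range_contiguous);
	#endif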