Commit 3ad78ba

kees authored and Alex Shi committed
mm: SLUB hardened usercopy support
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the SLUB allocator to catch any copies that may span objects. Includes a redzone handling fix discovered by Michael Ellerman. Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Laura Abbott <labbott@redhat.com>
(cherry picked from commit ed18adc1cdd00a5c55a20fbdaed4804660772281)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
1 parent 784bd0f commit 3ad78ba

2 files changed: 41 additions & 0 deletions

init/Kconfig

Lines changed: 1 addition & 0 deletions
@@ -1727,6 +1727,7 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).
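With this select in place, the SLUB allocator advertises HAVE_HARDENED_USERCOPY_ALLOCATOR, which the top-level CONFIG_HARDENED_USERCOPY option depends on. A minimal kernel config fragment enabling the checks might look like the following (a sketch, assuming the mainline option names):

```
# Pick SLUB as the slab allocator; it now selects
# HAVE_HARDENED_USERCOPY_ALLOCATOR automatically.
CONFIG_SLUB=y
# Turn on the usercopy bounds checks added by this series.
CONFIG_HARDENED_USERCOPY=y
```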

mm/slub.c

Lines changed: 40 additions & 0 deletions
@@ -3585,6 +3585,46 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+	size_t object_size;
+
+	/* Find object and usable object size. */
+	s = page->slab_cache;
+	object_size = slab_ksize(s);
+
+	/* Reject impossible pointers. */
+	if (ptr < page_address(page))
+		return s->name;
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Adjust for redzone and reject if within the redzone. */
+	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+		if (offset < s->red_left_pad)
+			return s->name;
+		offset -= s->red_left_pad;
+	}
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= object_size && n <= object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
