
Commit 9286385

minchank authored and gregkh committed
zram: do not use copy_page with non-page aligned address
commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.

copy_page() is an optimized memcpy for page-aligned addresses. If it is used with a non-page-aligned address, it can corrupt memory, which means system corruption. With zram, this can happen with:

1. a 64K-page architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page, and zram allocates it via kmalloc. With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-size-aligned address, and so copy_page(mem, cmem) corrupts memory.

So, this patch changes it to memcpy. Actually, we don't need to change the zram_bvec_write part because zsmalloc returns a page-aligned address in the case of the PAGE_SIZE class, but it's not good to rely on the internals of zsmalloc.

Note: when this patch is merged to stable, clear_page() should be fixed, too. Unfortunately, recent zram removed it with the "same page merge" feature, so it's hard to backport this patch to the -stable tree. I will handle it when I receive the mail from the stable tree maintainer about merging this patch for backport.

Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent c1fc1d2 commit 9286385

1 file changed

Lines changed: 3 additions & 3 deletions

drivers/block/zram/zram_drv.c
@@ -574,13 +574,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)

 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
 		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-		clear_page(mem);
+		memset(mem, 0, PAGE_SIZE);
 		return 0;
 	}

 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE)
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	else
 		ret = zcomp_decompress(zram->comp, cmem, size, mem);
 	zs_unmap_object(meta->mem_pool, handle);
@@ -738,7 +738,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,

 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);
