Commit 3040ecd

James Hogan authored and gregkh committed
metag/usercopy: Fix src fixup in from user rapf loops
commit 2c0b1df88b987a12d95ea1d6beaf01894f3cc725 upstream.

The fixup code to rewind the source pointer in
__asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by
a single unit (4 or 8 bytes). However, this is insufficient if the fault
didn't occur on the first load in the loop, as the source pointer will
have been incremented but nothing will have been stored until all 4
register [pairs] are loaded.

Read the LSM_STEP field of TXSTATUS (which is already loaded into a
register), a bit like the copy_to_user versions, to determine how many
iterations of MGET[DL] have taken place, all of which need rewinding.

Fixes: 373cd78 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
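The rewind logic described above can be sketched in C for illustration. This is not kernel code: `rapf_rewind_bytes` is a hypothetical helper modelling the arithmetic only, under the assumptions stated in the commit message (LSM_STEP occupies bits 10:8 of TXSTATUS, the loop performs 4 operations per block, and a step count of 0 means the fault hit the last operation).

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not the kernel implementation): how many bytes
 * the source pointer must be rewound after a fault, given the TXSTATUS
 * value and the unit size (8 for the 64-bit MGETL loop, 4 for the
 * 32-bit MGETD loop).  LSM_STEP is bits 10:8 of TXSTATUS; a value of 0
 * means the fault hit the last (4th) operation, so the whole block of
 * 4 units must be rewound.
 */
static uint32_t rapf_rewind_bytes(uint32_t txstatus, uint32_t unit)
{
	uint32_t lsm_step = (txstatus >> 8) & 0x7;	/* bits 10:8 */

	if (lsm_step == 0)
		lsm_step = 4;	/* fault on last op: rewind full block */
	return lsm_step * unit;
}
```

The old code effectively always returned `unit` here, which under-rewinds whenever the fault lands on the 2nd, 3rd, or 4th load of the block.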
1 parent beb0ad9 commit 3040ecd

1 file changed

Lines changed: 28 additions & 8 deletions

File tree

arch/metag/lib/usercopy.c

@@ -687,29 +687,49 @@ EXPORT_SYMBOL(__copy_user);
  *
  *	Rationale:
  *		A fault occurs while reading from user buffer, which is the
- *		source. Since the fault is at a single address, we only
- *		need to rewind by 8 bytes.
+ *		source.
  *		Since we don't write to kernel buffer until we read first,
  *		the kernel buffer is at the right state and needn't be
- *		corrected.
+ *		corrected, but the source must be rewound to the beginning of
+ *		the block, which is LSM_STEP*8 bytes.
+ *		LSM_STEP is bits 10:8 in TXSTATUS which is already read
+ *		and stored in D0Ar2
+ *
+ *		NOTE: If a fault occurs at the last operation in M{G,S}ETL
+ *		      LSM_STEP will be 0. ie: we do 4 writes in our case, if
+ *		      a fault happens at the 4th write, LSM_STEP will be 0
+ *		      instead of 4. The code copes with that.
  */
 #define __asm_copy_from_user_64bit_rapf_loop(to, from, ret, n, id)	\
 	__asm_copy_user_64bit_rapf_loop(to, from, ret, n, id,		\
-		"SUB	%1, %1, #8\n")
+		"LSR	D0Ar2, D0Ar2, #5\n"				\
+		"ANDS	D0Ar2, D0Ar2, #0x38\n"				\
+		"ADDZ	D0Ar2, D0Ar2, #32\n"				\
+		"SUB	%1, %1, D0Ar2\n")
 
 /*	rewind 'from' pointer when a fault occurs
  *
  *	Rationale:
  *		A fault occurs while reading from user buffer, which is the
- *		source. Since the fault is at a single address, we only
- *		need to rewind by 4 bytes.
+ *		source.
  *		Since we don't write to kernel buffer until we read first,
  *		the kernel buffer is at the right state and needn't be
- *		corrected.
+ *		corrected, but the source must be rewound to the beginning of
+ *		the block, which is LSM_STEP*4 bytes.
+ *		LSM_STEP is bits 10:8 in TXSTATUS which is already read
+ *		and stored in D0Ar2
+ *
+ *		NOTE: If a fault occurs at the last operation in M{G,S}ETL
+ *		      LSM_STEP will be 0. ie: we do 4 writes in our case, if
+ *		      a fault happens at the 4th write, LSM_STEP will be 0
+ *		      instead of 4. The code copes with that.
  */
 #define __asm_copy_from_user_32bit_rapf_loop(to, from, ret, n, id)	\
 	__asm_copy_user_32bit_rapf_loop(to, from, ret, n, id,		\
-		"SUB	%1, %1, #4\n")
+		"LSR	D0Ar2, D0Ar2, #6\n"				\
+		"ANDS	D0Ar2, D0Ar2, #0x1c\n"				\
+		"ADDZ	D0Ar2, D0Ar2, #16\n"				\
+		"SUB	%1, %1, D0Ar2\n")
 
 
 /*
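The fixup assembly avoids a multiply by shifting TXSTATUS so that LSM_STEP lands pre-scaled: shifting right by 5 (instead of 8) and masking with 0x38 yields LSM_STEP*8 directly, and shifting right by 6 with mask 0x1c yields LSM_STEP*4; ANDS sets the zero flag, letting ADDZ substitute the whole-block size (32 or 16 bytes) when LSM_STEP is 0. A small C check of that bit arithmetic, with hypothetical helper names chosen here for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative check (not kernel code) of the pre-scaling trick used by
 * the fixup: LSM_STEP occupies bits 10:8 of TXSTATUS, so a smaller
 * right shift leaves it already multiplied by the unit size.
 */
static uint32_t lsm_step_times_8(uint32_t txstatus)
{
	/* (txstatus >> 5) & 0x38 == ((txstatus >> 8) & 0x7) * 8 */
	return (txstatus >> 5) & 0x38;
}

static uint32_t lsm_step_times_4(uint32_t txstatus)
{
	/* (txstatus >> 6) & 0x1c == ((txstatus >> 8) & 0x7) * 4 */
	return (txstatus >> 6) & 0x1c;
}
```

Because the scaled value comes straight from a register already holding TXSTATUS (D0Ar2), the whole rewind costs only a shift, a mask, a conditional add, and a subtract.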
