From: Heiko Carstens <heiko.carstens@de.ibm.com>

On s390 the lock value used for spinlocks consists of the lower 32 bits of the
PSW address of the CPU that holds the lock.  If this address happens to lie on
a four gigabyte boundary, its lower 32 bits are zero and the lock appears to be
unlocked.  This allows other cpus to grab the same lock and enter a lock
protected code path concurrently.  In theory this can happen if the vmalloc
area for the code of a module crosses a 4 GB boundary.  Fix this by always
setting the least significant bit of the lock value, so that it can never be
zero.
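For illustration only (not part of the patch), here is a minimal user-space
sketch of the failure mode and of why OR-ing in 1 is sufficient; the
4 GB-aligned return address is a hypothetical example:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical return address sitting exactly on a 4 GB boundary */
		unsigned long long pc = 0x100000000ULL;

		/* old lock value: truncated to 32 bits it is 0, i.e. "unlocked" */
		unsigned int before = (unsigned int) pc;

		/* patched lock value: OR-ing in 1 keeps the lock word non-zero */
		unsigned int after = (unsigned int) (1 | pc);

		printf("before: %#x  after: %#x\n", before, after);
		return 0;
	}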

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 include/asm-s390/spinlock.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN include/asm-s390/spinlock.h~s390-spinlock-corner-case include/asm-s390/spinlock.h
--- devel/include/asm-s390/spinlock.h~s390-spinlock-corner-case	2005-08-29 23:58:37.000000000 -0700
+++ devel-akpm/include/asm-s390/spinlock.h	2005-08-29 23:58:37.000000000 -0700
@@ -47,7 +47,7 @@ extern int _raw_spin_trylock_retry(spinl
 
 static inline void _raw_spin_lock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (unlikely(_raw_compare_and_swap(&lp->lock, 0, pc) != 0))
 		_raw_spin_lock_wait(lp, pc);
@@ -55,7 +55,7 @@ static inline void _raw_spin_lock(spinlo
 
 static inline int _raw_spin_trylock(spinlock_t *lp)
 {
-	unsigned long pc = (unsigned long) __builtin_return_address(0);
+	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
 	if (likely(_raw_compare_and_swap(&lp->lock, 0, pc) == 0))
 		return 1;
_