From: Christoph Lameter <christoph@lameter.com>

The problem is that the slab allocator cannot correctly generate the
initial two slabs if the size of the second slab is forced to be PAGE_SIZE
by CONFIG_DEBUG_PAGEALLOC.  

If the size is forced to PAGE_SIZE then kmem_cache_create will consult
offslab_limit to figure out whether the allocation order needs to be
reduced.  That check always fires, because offslab_limit is still zero at
this point, so the allocation order is reduced from 0 to -1.  Kaboom...
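
For illustration, a minimal standalone sketch of the failure mode.  This is
a hypothetical model, not the real slab.c order-selection loop: the names
gfporder, num and offslab_limit mirror slab.c, but the loop body and the
object count are made up.

  #include <stdio.h>

  int main(void)
  {
          unsigned int gfporder = 0;      /* order of the slab's pages     */
          unsigned int num = 16;          /* made-up object count          */
          unsigned int offslab_limit = 0; /* still zero during early init  */

          /* "Too many objects, reduce the order" check from the sketch. */
          if (num > offslab_limit) {      /* always true while limit == 0  */
                  gfporder--;             /* 0 - 1 wraps to 4294967295     */
                  printf("order reduced to %u -> kaboom\n", gfporder);
          }
          return 0;
  }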

With CONFIG_DEBUG_PAGEALLOC set, the patch raises the minimum slab size at
which the size is forced to PAGE_SIZE: the new threshold is the cache size
one step above the cache size used for struct kmem_list3.  As a result
offslab_limit is no longer consulted for the initial two slabs in
kmem_cache_init.

Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/slab.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/slab.c~numa-aware-slab-allocator-v5-fix mm/slab.c
--- devel/mm/slab.c~numa-aware-slab-allocator-v5-fix	2005-07-13 23:35:29.000000000 -0700
+++ devel-akpm/mm/slab.c	2005-07-13 23:35:29.000000000 -0700
@@ -1616,7 +1616,7 @@ kmem_cache_create (const char *name, siz
 		size += BYTES_PER_WORD;
 	}
 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
-	if (size > 128 && cachep->reallen > cache_line_size() && size < PAGE_SIZE) {
+	if (size >= malloc_sizes[INDEX_L3+1].cs_size && cachep->reallen > cache_line_size() && size < PAGE_SIZE) {
 		cachep->dbghead += PAGE_SIZE - size;
 		size = PAGE_SIZE;
 	}
_