From: David Gibson <david@gibson.dropbear.id.au>

Currently, calling msync() on a hugepage area causes the kernel to blow
up with a bad_page() (observed on ppc64, but the problem likely exists
on other archs too).  The msync path attempts to walk page tables which
may not be there, or which may have an unusual layout for hugepages.

Luckily we shouldn't need to do anything for an msync() on hugetlbfs
beyond flushing the cache, so this patch should be sufficient to fix the
problem.

Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 25-akpm/mm/msync.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletion(-)

diff -puN mm/msync.c~hugetlb-msync-fix mm/msync.c
--- 25/mm/msync.c~hugetlb-msync-fix	2004-06-02 00:48:40.301025592 -0700
+++ 25-akpm/mm/msync.c	2004-06-02 00:48:40.305024984 -0700
@@ -11,6 +11,7 @@
 #include <linux/pagemap.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
+#include <linux/hugetlb.h>
 
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -105,6 +106,13 @@ static int filemap_sync(struct vm_area_s
 
 	dir = pgd_offset(vma->vm_mm, address);
 	flush_cache_range(vma, address, end);
+
+	/* For hugepages we can't go walking the page table normally,
+	 * but that's ok, hugetlbfs is memory based, so we don't need
+	 * to do anything more on an msync() */
+	if (is_vm_hugetlb_page(vma))
+		goto out;
+
 	if (address >= end)
 		BUG();
 	do {
@@ -117,7 +125,7 @@ static int filemap_sync(struct vm_area_s
 	 * dirty bits.
 	 */
 	flush_tlb_range(vma, end - size, end);
-
+ out:
 	spin_unlock(&vma->vm_mm->page_table_lock);
 
 	return error;
_