From: Zachary Amsden <zach@vmware.com>

Subtle fix: load_TLS() has been moved to after the %fs and %gs segments are
saved.  Doing it the other way around leaves a window in which %fs and %gs
hold non-reloadable selectors: their TLS descriptors in the GDT have already
been rewritten for the next task, so reloading the saved selectors would pick
up the wrong hidden state.  The old ordering could conceivably cause a bug if
the kernel ever needed to save and restore %fs/%gs from the NMI handler.  It
currently does not, but this is the safest approach to avoiding %fs/%gs
corruption.  SMIs are safe, since an SMI saves the segments' hidden
descriptor state along with the selectors.
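
To make the window concrete, here is a sketch of the two orderings (not
part of the patch; the NMI handler behaviour shown is hypothetical,
since today's handler does not touch %fs or %gs):

	/* Old ordering in __switch_to() (sketch): */
	load_TLS(next, cpu);		/* GDT TLS slots now describe next */
	savesegment(fs, prev->fs);	/* selectors saved after their      */
	savesegment(gs, prev->gs);	/* backing descriptors changed      */
	/*
	 * An NMI handler doing savesegment(fs, tmp); ...;
	 * loadsegment(fs, tmp); in this window would reload %fs
	 * against next's descriptor, corrupting prev's segment.
	 */

	/* New ordering: save while the selectors still match the
	 * descriptors they were loaded from, then switch the GDT
	 * TLS slots. */
	savesegment(fs, prev->fs);
	savesegment(gs, prev->gs);
	load_TLS(next, cpu);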

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 arch/i386/kernel/process.c |   19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff -puN arch/i386/kernel/process.c~i386-load_tls-fix arch/i386/kernel/process.c
--- devel/arch/i386/kernel/process.c~i386-load_tls-fix	2005-07-30 00:28:07.000000000 -0700
+++ devel-akpm/arch/i386/kernel/process.c	2005-07-30 00:28:07.000000000 -0700
@@ -678,21 +678,26 @@ struct task_struct fastcall * __switch_t
 	__unlazy_fpu(prev_p);
 
 	/*
-	 * Reload esp0, LDT and the page table pointer:
+	 * Reload esp0.
 	 */
 	load_esp0(tss, next);
 
 	/*
-	 * Load the per-thread Thread-Local Storage descriptor.
+	 * Save away %fs and %gs. No need to save %es and %ds, as
+	 * those are always kernel segments while inside the kernel.
+	 * Doing this before setting the new TLS descriptors avoids
+	 * the situation where we temporarily have non-reloadable
+	 * segments in %fs and %gs.  This could be an issue if the
+	 * NMI handler ever used %fs or %gs (it does not today), or
+	 * if the kernel is running inside of a hypervisor layer.
 	 */
-	load_TLS(next, cpu);
+	savesegment(fs, prev->fs);
+	savesegment(gs, prev->gs);
 
 	/*
-	 * Save away %fs and %gs. No need to save %es and %ds, as
-	 * those are always kernel segments while inside the kernel.
+	 * Load the per-thread Thread-Local Storage descriptor.
 	 */
-	asm volatile("mov %%fs,%0":"=m" (prev->fs));
-	asm volatile("mov %%gs,%0":"=m" (prev->gs));
+	load_TLS(next, cpu);
 
 	/*
 	 * Restore %fs and %gs if needed.
_
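
(For reference: savesegment(), used above, is just a wrapper around the
same mov that the old open-coded asm performed.  On i386 of this
vintage it is defined in include/asm-i386/system.h roughly as follows;
this is a sketch from memory, check the header for the exact
definition:

	/* Save a segment register's selector into 'value' (sketch). */
	#define savesegment(seg, value) \
		asm volatile("mov %%" #seg ",%0" : "=m" (value))

so the patch changes only the ordering and the spelling, not the
instruction executed.)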