Sometimes kjournald has to refile a huge number of buffers because someone
else already wrote them out, so they are all clean.

This happens under a lock, and scheduling latencies of 88 milliseconds were
observed on a 2.7GHz CPU.

The patch forward-ports a little bit of the 2.4 low-latency patch to fix this
problem.

Worst-case latency on ext3 is now under half a millisecond, except when the
RCU dentry-reaping softirq cuts in :(
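
The change applies the usual lock-break idiom from the 2.4 low-latency work:
when a reschedule is pending while walking the sync-data list under
j_list_lock, drop the lock, cond_resched(), then retake the lock and carry on.
A minimal sketch of that idiom follows; the function and names here are
purely illustrative, not the jbd code itself:

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

static void walk_list_lowlatency(spinlock_t *lock, struct list_head *head)
{
	struct list_head *pos;

	spin_lock(lock);
restart:
	list_for_each(pos, head) {
		/* per-entry work done while holding the lock ... */
		if (need_resched()) {
			/* break the lock, yield, then rescan */
			spin_unlock(lock);
			cond_resched();
			spin_lock(lock);
			goto restart;	/* the list may have changed */
		}
	}
	spin_unlock(lock);
}

In the actual patch the restart point is remembered via
commit_transaction->t_sync_datalist rather than rescanning from the head, and
any buffers already gathered in wbuf[] are submitted before rescheduling.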



 fs/jbd/commit.c |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff -puN fs/jbd/commit.c~ext3-latency-fix fs/jbd/commit.c
--- 25/fs/jbd/commit.c~ext3-latency-fix	2003-10-19 01:33:54.000000000 -0700
+++ 25-akpm/fs/jbd/commit.c	2003-10-19 01:33:54.000000000 -0700
@@ -264,6 +264,16 @@ write_out_data_locked:
 				jbd_unlock_bh_state(bh);
 				journal_remove_journal_head(bh);
 				__brelse(bh);
+				if (need_resched() && commit_transaction->
+							t_sync_datalist) {
+					commit_transaction->t_sync_datalist =
+								next_jh;
+					if (bufs)
+						break;
+					spin_unlock(&journal->j_list_lock);
+					cond_resched();
+					goto write_out_data;
+				}
 			}
 		}
 		if (bufs == ARRAY_SIZE(wbuf)) {
@@ -284,8 +294,7 @@ write_out_data_locked:
 		cond_resched();
 		journal_brelse_array(wbuf, bufs);
 		spin_lock(&journal->j_list_lock);
-		if (bufs)
-			goto write_out_data_locked;
+		goto write_out_data_locked;
 	}
 
 	/*

_