Commit 697e6fed authored by Jan Kara, committed by Fengguang Wu

writeback: Remove outdated comment

The comment is hopelessly outdated and misplaced. We no longer have a 'bdi'
part of writeback work, the comment about the blockdev superblock is
outdated, and so is the comment about throttling. Information about list
handling is documented in more detail at queue_io(). So just move the bit
about older_than_this close to move_expired_inodes() and remove the rest.
Reviewed-by: Christoph Hellwig <>
Signed-off-by: Jan Kara <>
Signed-off-by: Fengguang Wu <>
parent f469ec9c
@@ -256,7 +256,8 @@ static bool inode_dirtied_after(struct inode *inode, unsigned long t)
 /*
- * Move expired dirty inodes from @delaying_queue to @dispatch_queue.
+ * Move expired (dirtied after work->older_than_this) dirty inodes from
+ * @delaying_queue to @dispatch_queue.
  */
 static int move_expired_inodes(struct list_head *delaying_queue,
 			       struct list_head *dispatch_queue,
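The updated comment describes what move_expired_inodes() does: walk a delaying queue and move the entries whose dirty timestamp has expired onto a dispatch queue. As an illustrative userspace sketch only (not the kernel code: the singly-linked list, the simplified `inode` struct, and the plain `<=` time comparison here are stand-ins for the kernel's `list_head` machinery and wrap-safe time macros), the expiry move looks roughly like:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's inode; illustrative only. */
struct inode {
    unsigned long dirtied_when;   /* timestamp of first dirtying */
    struct inode *next;
};

/*
 * Move every inode dirtied at or before @older_than_this from
 * @delaying_queue to @dispatch_queue; return how many were moved.
 * (A simplified sketch of the idea, not the kernel implementation.)
 */
static int move_expired_inodes(struct inode **delaying_queue,
                               struct inode **dispatch_queue,
                               unsigned long older_than_this)
{
    int moved = 0;
    struct inode **pp = delaying_queue;

    while (*pp) {
        struct inode *in = *pp;
        if (in->dirtied_when <= older_than_this) {
            *pp = in->next;               /* unlink from delaying queue */
            in->next = *dispatch_queue;   /* push onto dispatch queue  */
            *dispatch_queue = in;
            moved++;
        } else {
            pp = &in->next;               /* keep not-yet-expired inode */
        }
    }
    return moved;
}
```

The kernel version additionally preserves dirtied-order within the dispatch list and handles `dirtied_when` wraparound; this sketch only shows the queue-to-queue move that the new comment documents.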
@@ -1148,23 +1149,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
-/*
- * Write out a superblock's list of dirty inodes. A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched. For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * The inodes to be written are parked on bdi->b_io. They are moved back onto
- * bdi->b_dirty as they are selected for writing. This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
 static void wait_sb_inodes(struct super_block *sb)
 {
 	struct inode *inode, *old_inode = NULL;