blk-mq: rework flush sequencing logic

Switch to using a preallocated flush_rq for blk-mq similar to what's done
with the old request path.  This allows us to set up the request properly
with a tag from the actually allowed range and ->rq_disk as needed by
some drivers.  To make life easier we also switch to dynamic allocation
of ->flush_rq for the old path.
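The per-queue preallocation can be sketched in user space as follows; the struct layouts and helper names (blk_init_flush_rq, blk_get_flush_rq) are simplified, hypothetical stand-ins for the kernel's real definitions, not the actual blk-mq code:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures (hypothetical). */
struct request {
	int tag;
	void *rq_disk;
};

struct request_queue {
	/* One preallocated flush request per queue, set up at queue
	 * init time instead of being carved out of the tag space or
	 * reserved-tag pool later. */
	struct request *flush_rq;
};

/* Allocate ->flush_rq once when the queue is created. */
static int blk_init_flush_rq(struct request_queue *q)
{
	q->flush_rq = calloc(1, sizeof(*q->flush_rq));
	return q->flush_rq ? 0 : -1;
}

/* Reuse the preallocated request for each flush sequence, setting it
 * up with a tag from the normally allowed range and the ->rq_disk of
 * the request that triggered the flush. */
static struct request *blk_get_flush_rq(struct request_queue *q,
					struct request *orig)
{
	q->flush_rq->tag = orig->tag;
	q->flush_rq->rq_disk = orig->rq_disk;
	return q->flush_rq;
}
```

Because the flush request lives for the lifetime of the queue, no tag needs to be reserved for it, which is what lets the two earlier workarounds be reverted.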

This effectively reverts most of

    "blk-mq: fix for flush deadlock"

and

    "blk-mq: Don't reserve a tag for flush request"

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
commit 18741986a4 (parent ce2c350b2c)
Author:    Christoph Hellwig (2014-02-10 09:29:00 -07:00)
Committed: Jens Axboe

7 changed files with 76 additions and 117 deletions
+3 -8
@@ -101,7 +101,7 @@ struct request {
 	};
 	union {
 		struct call_single_data csd;
-		struct work_struct mq_flush_data;
+		struct work_struct mq_flush_work;
 	};
 	struct request_queue *q;
@@ -451,13 +451,8 @@ struct request_queue {
 	unsigned long		flush_pending_since;
 	struct list_head	flush_queue[2];
 	struct list_head	flush_data_in_flight;
-	union {
-		struct request	flush_rq;
-		struct {
-			spinlock_t mq_flush_lock;
-			struct work_struct mq_flush_work;
-		};
-	};
+	struct request		*flush_rq;
+	spinlock_t		mq_flush_lock;
 
 	struct mutex		sysfs_lock;