Merge branch 'for-4.10/block' of git://git.kernel.dk/linux-block
Pull block layer updates from Jens Axboe:
"This is the main block pull request this series. Contrary to previous
release, I've kept the core and driver changes in the same branch. We
always ended up having dependencies between the two for obvious
reasons, so makes more sense to keep them together. That said, I'll
probably try and keep more topical branches going forward, especially
for cycles that end up being as busy as this one.
The major parts of this pull request is:
- Improved support for O_DIRECT on block devices, with a small
private implementation instead of using the pig that is
fs/direct-io.c. From Christoph.
- Request completion tracking in a scalable fashion. This is utilized
by two components in this pull, the new hybrid polling and the
writeback queue throttling code.
- Improved support for polling with O_DIRECT, adding a hybrid mode
that combines pure polling with an initial sleep. From me.
- Support for automatic throttling of writeback queues on the block
side. This uses feedback from the device completion latencies to
scale the queue on the block side up or down. From me.
- Support for SMR drives in the block layer and for SD. From Hannes
and Shaun.
- Multi-connection support for nbd. From Josef.
- Cleanup of request and bio flags, so we have a clear split between
which are bio (or rq) private, and which ones are shared. From
Christoph.
- A set of patches from Bart, that improve how we handle queue
stopping and starting in blk-mq.
- Support for WRITE_ZEROES from Chaitanya.
- Lightnvm updates from Javier/Matias.
- Support for FC for the nvme-over-fabrics code. From James Smart.
- A bunch of fixes from a whole slew of people, too many to name
here"
* 'for-4.10/block' of git://git.kernel.dk/linux-block: (182 commits)
blk-stat: fix a few cases of missing batch flushing
blk-flush: run the queue when inserting blk-mq flush
elevator: make the rqhash helpers exported
blk-mq: abstract out blk_mq_dispatch_rq_list() helper
blk-mq: add blk_mq_start_stopped_hw_queue()
block: improve handling of the magic discard payload
blk-wbt: don't throttle discard or write zeroes
nbd: use dev_err_ratelimited in io path
nbd: reset the setup task for NBD_CLEAR_SOCK
nvme-fabrics: Add FC LLDD loopback driver to test FC-NVME
nvme-fabrics: Add target support for FC transport
nvme-fabrics: Add host support for FC transport
nvme-fabrics: Add FC transport LLDD api definitions
nvme-fabrics: Add FC transport FC-NVME definitions
nvme-fabrics: Add FC transport error codes to nvme.h
Add type 0x28 NVME type code to scsi fc headers
nvme-fabrics: patch target code in prep for FC transport support
nvme-fabrics: set sqe.command_id in core not transports
parser: add u64 number parser
nvme-rdma: align to generic ib_event logging helper
...
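The O_DIRECT and polling work above is exercised from userspace through synchronous direct I/O on a raw block device. A minimal sketch, assuming a libc that exposes preadv2() and the RWF_HIPRI flag and using an illustrative /dev/nvme0n1 target (neither is mandated by this pull):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        void *buf;
        struct iovec iov;
        ssize_t ret;
        /* O_DIRECT needs sector-aligned buffer, offset and length */
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);

        if (fd < 0 || posix_memalign(&buf, 4096, 4096))
            return 1;

        iov.iov_base = buf;
        iov.iov_len = 4096;

        /* RWF_HIPRI asks the kernel to poll for completion; with the
         * hybrid mode enabled the poll is preceded by a short sleep. */
        ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
        printf("read %zd bytes\n", ret);

        free(buf);
        close(fd);
        return 0;
    }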
@@ -235,3 +235,45 @@ Description:
write_same_max_bytes is 0, write same is not supported
by the device.

What:		/sys/block/<disk>/queue/write_zeroes_max_bytes
Date:		November 2016
Contact:	Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Description:
		Devices that support write zeroes operation in which a
		single request can be issued to zero out the range of
		contiguous blocks on storage without having any payload
		in the request. This can be used to optimize writing zeroes
		to the devices. write_zeroes_max_bytes indicates how many
		bytes can be written in a single write zeroes command. If
		write_zeroes_max_bytes is 0, write zeroes is not supported
		by the device.

What:		/sys/block/<disk>/queue/zoned
Date:		September 2016
Contact:	Damien Le Moal <damien.lemoal@hgst.com>
Description:
		zoned indicates if the device is a zoned block device
		and the zone model of the device if it is indeed zoned.
		The possible values indicated by zoned are "none" for
		regular block devices and "host-aware" or "host-managed"
		for zoned block devices. The characteristics of
		host-aware and host-managed zoned block devices are
		described in the ZBC (Zoned Block Commands) and ZAC
		(Zoned Device ATA Command Set) standards. These standards
		also define the "drive-managed" zone model. However,
		since drive-managed zoned block devices do not support
		zone commands, they will be treated as regular block
		devices and zoned will report "none".

What:		/sys/block/<disk>/queue/chunk_sectors
Date:		September 2016
Contact:	Hannes Reinecke <hare@suse.com>
Description:
		chunk_sectors has different meaning depending on the type
		of the disk. For a RAID device (dm-raid), chunk_sectors
		indicates the size in 512B sectors of the RAID volume
		stripe segment. For a zoned block device, either
		host-aware or host-managed, chunk_sectors indicates the
		size of 512B sectors of the zones of the device, with
		the eventual exception of the last zone of the device
		which may be smaller.
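These attributes are plain text files under sysfs; a small sketch that reads the new ones for a disk (the sda target and the helper are illustrative, not part of the patch set):

    #include <stdio.h>

    static void show(const char *path)
    {
        char buf[128];
        FILE *f = fopen(path, "r");

        if (f && fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show("/sys/block/sda/queue/write_zeroes_max_bytes");
        show("/sys/block/sda/queue/zoned");
        show("/sys/block/sda/queue/chunk_sectors");
        return 0;
    }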
@@ -348,7 +348,7 @@ Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This is routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have REQ_DONTPREP
(The prepare function would not be called for requests that have RQF_DONTPREP
enabled)

Aside:
@@ -553,8 +553,8 @@ struct request {
struct request_list *rl;
}

See the rq_flag_bits definitions for an explanation of the various flags
available. Some bits are used by the block layer or i/o scheduler.
See the req_ops and req_flag_bits definitions for an explanation of the various
flags available. Some bits are used by the block layer or i/o scheduler.

The behaviour of the various sector counts are almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
@@ -240,11 +240,11 @@ All cfq queues doing synchronous sequential IO go on to sync-idle tree.
On this tree we idle on each queue individually.

All synchronous non-sequential queues go on sync-noidle tree. Also any
request which are marked with REQ_NOIDLE go on this service tree. On this
tree we do not idle on individual queues instead idle on the whole group
of queues or the tree. So if there are 4 queues waiting for IO to dispatch
we will idle only once last queue has dispatched the IO and there is
no more IO on this service tree.
synchronous write request which is not marked with REQ_IDLE goes on this
service tree. On this tree we do not idle on individual queues instead idle
on the whole group of queues or the tree. So if there are 4 queues waiting
for IO to dispatch we will idle only once last queue has dispatched the IO
and there is no more IO on this service tree.

All async writes go on async service tree. There is no idling on async
queues.
@@ -257,17 +257,17 @@ tree idling provides isolation with buffered write queues on async tree.

FAQ
===
Q1. Why to idle at all on queues marked with REQ_NOIDLE.
Q1. Why to idle at all on queues not marked with REQ_IDLE.

A1. We only do tree idle (all queues on sync-noidle tree) on queues marked
with REQ_NOIDLE. This helps in providing isolation with all the sync-idle
A1. We only do tree idle (all queues on sync-noidle tree) on queues not marked
with REQ_IDLE. This helps in providing isolation with all the sync-idle
queues. Otherwise in presence of many sequential readers, other
synchronous IO might not get fair share of disk.

For example, if there are 10 sequential readers doing IO and they get
100ms each. If a REQ_NOIDLE request comes in, it will be scheduled
roughly after 1 second. If after completion of REQ_NOIDLE request we
do not idle, and after a couple of milli seconds a another REQ_NOIDLE
100ms each. If a !REQ_IDLE request comes in, it will be scheduled
roughly after 1 second. If after completion of !REQ_IDLE request we
do not idle, and after a couple of milli seconds a another !REQ_IDLE
request comes in, again it will be scheduled after 1second. Repeat it
and notice how a workload can lose its disk share and suffer due to
multiple sequential readers.
@@ -276,16 +276,16 @@ A1. We only do tree idle (all queues on sync-noidle tree) on queues marked
context of fsync, and later some journaling data is written. Journaling
data comes in only after fsync has finished its IO (atleast for ext4
that seemed to be the case). Now if one decides not to idle on fsync
thread due to REQ_NOIDLE, then next journaling write will not get
thread due to !REQ_IDLE, then next journaling write will not get
scheduled for another second. A process doing small fsync, will suffer
badly in presence of multiple sequential readers.

Hence doing tree idling on threads using REQ_NOIDLE flag on requests
Hence doing tree idling on threads using !REQ_IDLE flag on requests
provides isolation from multiple sequential readers and at the same
time we do not idle on individual threads.

Q2. When to specify REQ_NOIDLE
A2. I would think whenever one is doing synchronous write and not expecting
Q2. When to specify REQ_IDLE
A2. I would think whenever one is doing synchronous write and expecting
more writes to be dispatched from same context soon, should be able
to specify REQ_NOIDLE on writes and that probably should work well for
to specify REQ_IDLE on writes and that probably should work well for
most of the cases.
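In kernel code this hint is carried on the bio's op flags. A hedged sketch of a submitter that expects more writes from the same context soon, per Q2/A2 above (only REQ_OP_WRITE, REQ_SYNC, REQ_IDLE and submit_bio() are from the kernel; the surrounding function is illustrative):

        #include <linux/bio.h>

        /* Sketch only: mark a synchronous write as REQ_IDLE so cfq keeps
         * idling on this queue, anticipating further writes from the same
         * context. */
        static void submit_sync_write(struct bio *bio)
        {
                bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
                submit_bio(bio);
        }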
@@ -72,4 +72,4 @@ use_per_node_hctx=[0/1]: Default: 0
queue for each CPU node in the system.

use_lightnvm=[0/1]: Default: 0
Register device with LightNVM. Requires blk-mq to be used.
Register device with LightNVM. Requires blk-mq and CONFIG_NVM to be enabled.
@@ -58,6 +58,20 @@ When read, this file shows the total number of block IO polls and how
many returned success. Writing '0' to this file will disable polling
for this device. Writing any non-zero value will enable this feature.

io_poll_delay (RW)
------------------
If polling is enabled, this controls what kind of polling will be
performed. It defaults to -1, which is classic polling. In this mode,
the CPU will repeatedly ask for completions without giving up any time.
If set to 0, a hybrid polling mode is used, where the kernel will attempt
to make an educated guess at when the IO will complete. Based on this
guess, the kernel will put the process issuing IO to sleep for an amount
of time, before entering a classic poll loop. This mode might be a
little slower than pure classic polling, but it will be more efficient.
If set to a value larger than 0, the kernel will put the process issuing
IO to sleep for this amont of microseconds before entering classic
polling.
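For reference, a sketch that switches a queue to hybrid polling by writing 0, per the description above (the nvme0n1 name is only an example):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/block/nvme0n1/queue/io_poll_delay", O_WRONLY);

        if (fd < 0)
            return 1;
        /* -1 = classic poll, 0 = hybrid poll, >0 = fixed sleep in usec */
        if (write(fd, "0", 1) != 1) {
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }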
iostats (RW)
-------------
This file is used to control (on/off) the iostats accounting of the
@@ -169,5 +183,14 @@ This is the number of bytes the device can write in a single write-same
command. A value of '0' means write-same is not supported by this
device.

wb_lat_usec (RW)
----------------
If the device is registered for writeback throttling, then this file shows
the target minimum read latency. If this latency is exceeded in a given
window of time (see wb_window_usec), then the writeback throttling will start
scaling back writes. Writing a value of '0' to this file disables the
feature. Writing a value of '-1' to this file resets the value to the
default setting.

Jens Axboe <jens.axboe@oracle.com>, February 2009
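A corresponding sketch for the writeback throttling target; the device name and the 75000 usec value are only examples, not recommendations from this series:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/block/sda/queue/wb_lat_usec", O_WRONLY);

        if (fd < 0)
            return 1;
        /* target read latency in microseconds; "0" disables, "-1" resets */
        if (write(fd, "75000", 5) != 5) {
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }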
+12 -2
@@ -8766,6 +8766,16 @@ L: linux-nvme@lists.infradead.org
S: Supported
F: drivers/nvme/target/

NVM EXPRESS FC TRANSPORT DRIVERS
M: James Smart <james.smart@broadcom.com>
L: linux-nvme@lists.infradead.org
S: Supported
F: include/linux/nvme-fc.h
F: include/linux/nvme-fc-driver.h
F: drivers/nvme/host/fc.c
F: drivers/nvme/target/fc.c
F: drivers/nvme/target/fcloop.c

NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
M: Maxime Ripard <maxime.ripard@free-electrons.com>
@@ -9656,8 +9666,8 @@ F: arch/mips/boot/dts/pistachio/
F: arch/mips/configs/pistachio*_defconfig

PKTCDVD DRIVER
M: Jiri Kosina <jikos@kernel.org>
S: Maintained
S: Orphan
M: linux-block@vger.kernel.org
F: drivers/block/pktcdvd.c
F: include/linux/pktcdvd.h
F: include/uapi/linux/pktcdvd.h

@@ -25,7 +25,6 @@

#include <linux/string.h>
#include <linux/types.h>
#include <linux/blk_types.h>
#include <asm/byteorder.h>
#include <asm/memory.h>
#include <asm-generic/pci_iomap.h>

@@ -22,7 +22,6 @@
#ifdef __KERNEL__

#include <linux/types.h>
#include <linux/blk_types.h>

#include <asm/byteorder.h>
#include <asm/barrier.h>

@@ -5,6 +5,7 @@ menuconfig BLOCK
bool "Enable the block layer" if EXPERT
default y
select SBITMAP
select SRCU
help
Provide block layer support for the kernel.

@@ -89,6 +90,14 @@ config BLK_DEV_INTEGRITY
T10/SCSI Data Integrity Field or the T13/ATA External Path
Protection. If in doubt, say N.

config BLK_DEV_ZONED
bool "Zoned block device support"
---help---
Block layer zoned block device support. This option enables
support for ZAC/ZBC host-managed and host-aware zoned block devices.

Say yes here if you have a ZAC or ZBC storage device.

config BLK_DEV_THROTTLING
bool "Block layer bio throttling support"
depends on BLK_CGROUP=y
@@ -112,6 +121,32 @@ config BLK_CMDLINE_PARSER

See Documentation/block/cmdline-partition.txt for more information.

config BLK_WBT
bool "Enable support for block device writeback throttling"
default n
---help---
Enabling this option enables the block layer to throttle buffered
background writeback from the VM, making it more smooth and having
less impact on foreground operations. The throttling is done
dynamically on an algorithm loosely based on CoDel, factoring in
the realtime performance of the disk.

config BLK_WBT_SQ
bool "Single queue writeback throttling"
default n
depends on BLK_WBT
---help---
Enable writeback throttling by default on legacy single queue devices

config BLK_WBT_MQ
bool "Multiqueue writeback throttling"
default y
depends on BLK_WBT
---help---
Enable writeback throttling by default on multiqueue devices.
Multiqueue currently doesn't have support for IO scheduling,
enabling this option is recommended.

menu "Partition Types"

source "block/partitions/Kconfig"

+3 -1
@@ -5,7 +5,7 @@
obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \
blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
blk-lib.o blk-mq.o blk-mq-tag.o \
blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
blk-mq-sysfs.o blk-mq-cpumap.o ioctl.o \
genhd.o scsi_ioctl.o partition-generic.o ioprio.o \
badblocks.o partitions/
@@ -23,3 +23,5 @@ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o t10-pi.o
obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o
obj-$(CONFIG_BLK_DEV_ZONED) += blk-zoned.o
obj-$(CONFIG_BLK_WBT) += blk-wbt.o

@@ -172,7 +172,7 @@ bool bio_integrity_enabled(struct bio *bio)
{
struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev);

if (!bio_is_rw(bio))
if (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE)
return false;

/* Already protected? */
+57 -11
@@ -270,11 +270,15 @@ static void bio_free(struct bio *bio)
}
}

void bio_init(struct bio *bio)
void bio_init(struct bio *bio, struct bio_vec *table,
unsigned short max_vecs)
{
memset(bio, 0, sizeof(*bio));
atomic_set(&bio->__bi_remaining, 1);
atomic_set(&bio->__bi_cnt, 1);

bio->bi_io_vec = table;
bio->bi_max_vecs = max_vecs;
}
EXPORT_SYMBOL(bio_init);

@@ -480,7 +484,7 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
return NULL;

bio = p + front_pad;
bio_init(bio);
bio_init(bio, NULL, 0);

if (nr_iovecs > inline_vecs) {
unsigned long idx = 0;
@@ -670,6 +674,7 @@ struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask,
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
case REQ_OP_WRITE_ZEROES:
break;
case REQ_OP_WRITE_SAME:
bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
@@ -847,6 +852,55 @@ done:
}
EXPORT_SYMBOL(bio_add_page);

/**
 * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
 * @bio: bio to add pages to
 * @iter: iov iterator describing the region to be mapped
 *
 * Pins as many pages from *iter and appends them to @bio's bvec array. The
 * pages will have to be released using put_page() when done.
 */
int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
size_t offset, diff;
ssize_t size;

size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;

/*
 * Deep magic below: We need to walk the pinned pages backwards
 * because we are abusing the space allocated for the bio_vecs
 * for the page array. Because the bio_vecs are larger than the
 * page pointers by definition this will always work. But it also
 * means we can't use bio_add_page, so any changes to it's semantics
 * need to be reflected here as well.
 */
bio->bi_iter.bi_size += size;
bio->bi_vcnt += nr_pages;

diff = (nr_pages * PAGE_SIZE - offset) - size;
while (nr_pages--) {
bv[nr_pages].bv_page = pages[nr_pages];
bv[nr_pages].bv_len = PAGE_SIZE;
bv[nr_pages].bv_offset = 0;
}

bv[0].bv_offset += offset;
bv[0].bv_len -= offset;
if (diff)
bv[bio->bi_vcnt - 1].bv_len -= diff;

iov_iter_advance(iter, size);
return 0;
}
EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);

struct submit_bio_ret {
struct completion event;
int error;
@@ -1786,15 +1840,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
BUG_ON(sectors <= 0);
BUG_ON(sectors >= bio_sectors(bio));

/*
 * Discards need a mutable bio_vec to accommodate the payload
 * required by the DSM TRIM and UNMAP commands.
 */
if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
split = bio_clone_bioset(bio, gfp, bs);
else
split = bio_clone_fast(bio, gfp, bs);

split = bio_clone_fast(bio, gfp, bs);
if (!split)
return NULL;
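The new bio_init() signature takes the bvec table and its size from the caller, as the hunks above show. A hedged sketch of an on-stack bio built that way (the surrounding function is illustrative; bio_init(), bio_add_page(), bio_set_op_attrs() and submit_bio_wait() are the kernel interfaces used in this series):

        #include <linux/bio.h>

        /* Sketch: single-page synchronous read using an on-stack bio,
         * with the bio_vec table passed explicitly to bio_init(). */
        static int read_one_page(struct block_device *bdev, sector_t sector,
                                 struct page *page)
        {
                struct bio bio;
                struct bio_vec bvec;

                bio_init(&bio, &bvec, 1);
                bio.bi_bdev = bdev;
                bio.bi_iter.bi_sector = sector;
                bio_add_page(&bio, page, PAGE_SIZE, 0);
                bio_set_op_attrs(&bio, REQ_OP_READ, REQ_SYNC);

                return submit_bio_wait(&bio);
        }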
+5 -4
@@ -185,7 +185,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
}

wb_congested = wb_congested_get_create(&q->backing_dev_info,
blkcg->css.id, GFP_NOWAIT);
blkcg->css.id,
GFP_NOWAIT | __GFP_NOWARN);
if (!wb_congested) {
ret = -ENOMEM;
goto err_put_css;
@@ -193,7 +194,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,

/* allocate */
if (!new_blkg) {
new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT);
new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT | __GFP_NOWARN);
if (unlikely(!new_blkg)) {
ret = -ENOMEM;
goto err_put_congested;
@@ -1022,7 +1023,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
}

spin_lock_init(&blkcg->lock);
INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT);
INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN);
INIT_HLIST_HEAD(&blkcg->blkg_list);
#ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&blkcg->cgwb_list);
@@ -1240,7 +1241,7 @@ pd_prealloc:
if (blkg->pd[pol->plid])
continue;

pd = pol->pd_alloc_fn(GFP_NOWAIT, q->node);
pd = pol->pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, q->node);
if (!pd)
swap(pd, pd_prealloc);
if (!pd) {

+105 -156
File diff suppressed because it is too large

+1 -1
@@ -72,7 +72,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
spin_lock_irq(q->queue_lock);

if (unlikely(blk_queue_dying(q))) {
rq->cmd_flags |= REQ_QUIET;
rq->rq_flags |= RQF_QUIET;
rq->errors = -ENXIO;
__blk_end_request_all(rq, rq->errors);
spin_unlock_irq(q->queue_lock);

+16 -11
@@ -56,7 +56,7 @@
 * Once while executing DATA and again after the whole sequence is
 * complete. The first completion updates the contained bio but doesn't
 * finish it so that the bio submitter is notified only after the whole
 * sequence is complete. This is implemented by testing REQ_FLUSH_SEQ in
 * sequence is complete. This is implemented by testing RQF_FLUSH_SEQ in
 * req_bio_endio().
 *
 * The above peculiarity requires that each FLUSH/FUA request has only one
@@ -127,17 +127,14 @@ static void blk_flush_restore_request(struct request *rq)
rq->bio = rq->biotail;

/* make @rq a normal request */
rq->cmd_flags &= ~REQ_FLUSH_SEQ;
rq->rq_flags &= ~RQF_FLUSH_SEQ;
rq->end_io = rq->flush.saved_end_io;
}

static bool blk_flush_queue_rq(struct request *rq, bool add_front)
{
if (rq->q->mq_ops) {
struct request_queue *q = rq->q;

blk_mq_add_to_requeue_list(rq, add_front);
blk_mq_kick_requeue_list(q);
blk_mq_add_to_requeue_list(rq, add_front, true);
return false;
} else {
if (add_front)
@@ -330,7 +327,8 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq)
}

flush_rq->cmd_type = REQ_TYPE_FS;
req_set_op_attrs(flush_rq, REQ_OP_FLUSH, WRITE_FLUSH | REQ_FLUSH_SEQ);
flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
flush_rq->rq_flags |= RQF_FLUSH_SEQ;
flush_rq->rq_disk = first_rq->rq_disk;
flush_rq->end_io = flush_end_io;

@@ -368,7 +366,7 @@ static void flush_data_end_io(struct request *rq, int error)
elv_completed_request(q, rq);

/* for avoiding double accounting */
rq->cmd_flags &= ~REQ_STARTED;
rq->rq_flags &= ~RQF_STARTED;

/*
 * After populating an empty queue, kick it to avoid stall. Read
@@ -425,6 +423,13 @@ void blk_insert_flush(struct request *rq)
if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
rq->cmd_flags &= ~REQ_FUA;

/*
 * REQ_PREFLUSH|REQ_FUA implies REQ_SYNC, so if we clear any
 * of those flags, we have to set REQ_SYNC to avoid skewing
 * the request accounting.
 */
rq->cmd_flags |= REQ_SYNC;

/*
 * An empty flush handed down from a stacking driver may
 * translate into nothing if the underlying device does not
@@ -449,7 +454,7 @@ void blk_insert_flush(struct request *rq)
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops) {
blk_mq_insert_request(rq, false, false, true);
blk_mq_insert_request(rq, false, true, false);
} else
list_add_tail(&rq->queuelist, &q->queue_head);
return;
@@ -461,7 +466,7 @@ void blk_insert_flush(struct request *rq)
 */
memset(&rq->flush, 0, sizeof(rq->flush));
INIT_LIST_HEAD(&rq->flush.list);
rq->cmd_flags |= REQ_FLUSH_SEQ;
rq->rq_flags |= RQF_FLUSH_SEQ;
rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
if (q->mq_ops) {
rq->end_io = mq_flush_data_end_io;
@@ -513,7 +518,7 @@ int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,

bio = bio_alloc(gfp_mask, 0);
bio->bi_bdev = bdev;
bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH);
bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;

ret = submit_bio_wait(bio);
+139 -38
@@ -29,7 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
struct request_queue *q = bdev_get_queue(bdev);
struct bio *bio = *biop;
unsigned int granularity;
enum req_op op;
unsigned int op;
int alignment;
sector_t bs_mask;

@@ -80,7 +80,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
req_sects = end_sect - sector;
}

bio = next_bio(bio, 1, gfp_mask);
bio = next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev;
bio_set_op_attrs(bio, op, 0);
@@ -137,24 +137,24 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
EXPORT_SYMBOL(blkdev_issue_discard);

/**
 * blkdev_issue_write_same - queue a write same operation
 * __blkdev_issue_write_same - generate number of bios with same page
 * @bdev: target blockdev
 * @sector: start sector
 * @nr_sects: number of sectors to write
 * @gfp_mask: memory allocation flags (for bio_alloc)
 * @page: page containing data to write
 * @biop: pointer to anchor bio
 *
 * Description:
 * Issue a write same request for the sectors in question.
 * Generate and issue number of bios(REQ_OP_WRITE_SAME) with same page.
 */
int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask,
struct page *page)
static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct page *page,
struct bio **biop)
{
struct request_queue *q = bdev_get_queue(bdev);
unsigned int max_write_same_sectors;
struct bio *bio = NULL;
int ret = 0;
struct bio *bio = *biop;
sector_t bs_mask;

if (!q)
@@ -164,6 +164,9 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
if ((sector | nr_sects) & bs_mask)
return -EINVAL;

if (!bdev_write_same(bdev))
return -EOPNOTSUPP;

/* Ensure that max_write_same_sectors doesn't overflow bi_size */
max_write_same_sectors = UINT_MAX >> 9;

@@ -185,32 +188,112 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
bio->bi_iter.bi_size = nr_sects << 9;
nr_sects = 0;
}
cond_resched();
}

if (bio) {
*biop = bio;
return 0;
}

/**
 * blkdev_issue_write_same - queue a write same operation
 * @bdev: target blockdev
 * @sector: start sector
 * @nr_sects: number of sectors to write
 * @gfp_mask: memory allocation flags (for bio_alloc)
 * @page: page containing data
 *
 * Description:
 * Issue a write same request for the sectors in question.
 */
int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask,
struct page *page)
{
struct bio *bio = NULL;
struct blk_plug plug;
int ret;

blk_start_plug(&plug);
ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, page,
&bio);
if (ret == 0 && bio) {
ret = submit_bio_wait(bio);
bio_put(bio);
}
blk_finish_plug(&plug);
return ret;
}
EXPORT_SYMBOL(blkdev_issue_write_same);

/**
 * blkdev_issue_zeroout - generate number of zero filed write bios
 * __blkdev_issue_write_zeroes - generate number of bios with WRITE ZEROES
 * @bdev: blockdev to issue
 * @sector: start sector
 * @nr_sects: number of sectors to write
 * @gfp_mask: memory allocation flags (for bio_alloc)
 * @biop: pointer to anchor bio
 *
 * Description:
 * Generate and issue number of bios(REQ_OP_WRITE_ZEROES) with zerofiled pages.
 */
static int __blkdev_issue_write_zeroes(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop)
{
struct bio *bio = *biop;
unsigned int max_write_zeroes_sectors;
struct request_queue *q = bdev_get_queue(bdev);

if (!q)
return -ENXIO;

/* Ensure that max_write_zeroes_sectors doesn't overflow bi_size */
max_write_zeroes_sectors = bdev_write_zeroes_sectors(bdev);

if (max_write_zeroes_sectors == 0)
return -EOPNOTSUPP;

while (nr_sects) {
bio = next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev;
bio_set_op_attrs(bio, REQ_OP_WRITE_ZEROES, 0);

if (nr_sects > max_write_zeroes_sectors) {
bio->bi_iter.bi_size = max_write_zeroes_sectors << 9;
nr_sects -= max_write_zeroes_sectors;
sector += max_write_zeroes_sectors;
} else {
bio->bi_iter.bi_size = nr_sects << 9;
nr_sects = 0;
}
cond_resched();
}

*biop = bio;
return 0;
}

/**
 * __blkdev_issue_zeroout - generate number of zero filed write bios
 * @bdev: blockdev to issue
 * @sector: start sector
 * @nr_sects: number of sectors to write
 * @gfp_mask: memory allocation flags (for bio_alloc)
 * @biop: pointer to anchor bio
 * @discard: discard flag
 *
 * Description:
 * Generate and issue number of bios with zerofiled pages.
 */

static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask)
int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
bool discard)
{
int ret;
struct bio *bio = NULL;
int bi_size = 0;
struct bio *bio = *biop;
unsigned int sz;
sector_t bs_mask;

@@ -218,6 +301,24 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
if ((sector | nr_sects) & bs_mask)
return -EINVAL;

if (discard) {
ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
BLKDEV_DISCARD_ZERO, biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;
}

ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask,
biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;

ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
ZERO_PAGE(0), biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;

ret = 0;
while (nr_sects != 0) {
bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),
gfp_mask);
@@ -227,21 +328,20 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,

while (nr_sects != 0) {
sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects);
ret = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0);
nr_sects -= ret >> 9;
sector += ret >> 9;
if (ret < (sz << 9))
bi_size = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0);
nr_sects -= bi_size >> 9;
sector += bi_size >> 9;
if (bi_size < (sz << 9))
break;
}
cond_resched();
}

if (bio) {
ret = submit_bio_wait(bio);
bio_put(bio);
return ret;
}
return 0;
*biop = bio;
out:
return ret;
}
EXPORT_SYMBOL(__blkdev_issue_zeroout);

/**
 * blkdev_issue_zeroout - zero-fill a block range
@@ -258,26 +358,27 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
 * the discard request fail, if the discard flag is not set, or if
 * discard_zeroes_data is not supported, this function will resort to
 * zeroing the blocks manually, thus provisioning (allocating,
 * anchoring) them. If the block device supports the WRITE SAME command
 * blkdev_issue_zeroout() will use it to optimize the process of
 * anchoring) them. If the block device supports WRITE ZEROES or WRITE SAME
 * command(s), blkdev_issue_zeroout() will use it to optimize the process of
 * clearing the block range. Otherwise the zeroing will be performed
 * using regular WRITE calls.
 */

int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, bool discard)
{
if (discard) {
if (!blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
BLKDEV_DISCARD_ZERO))
return 0;
int ret;
struct bio *bio = NULL;
struct blk_plug plug;

blk_start_plug(&plug);
ret = __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
&bio, discard);
if (ret == 0 && bio) {
ret = submit_bio_wait(bio);
bio_put(bio);
}
blk_finish_plug(&plug);

if (bdev_write_same(bdev) &&
blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
ZERO_PAGE(0)) == 0)
return 0;

return __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask);
return ret;
}
EXPORT_SYMBOL(blkdev_issue_zeroout);
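A short sketch of how a caller might use the reworked interface; the wrapper function is illustrative, while blkdev_issue_zeroout() and its fallback order (discard that guarantees zeroes, then WRITE ZEROES, then WRITE SAME, then plain zero-filled writes) follow the code above:

        #include <linux/blkdev.h>

        /* Sketch: zero a sector range on @bdev, letting the block layer
         * pick the cheapest mechanism the device supports. */
        static int zero_range(struct block_device *bdev, sector_t sector,
                              sector_t nr_sects)
        {
                return blkdev_issue_zeroout(bdev, sector, nr_sects,
                                            GFP_KERNEL, true);
        }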
+4 -2
@@ -16,6 +16,8 @@
int blk_rq_append_bio(struct request *rq, struct bio *bio)
{
if (!rq->bio) {
rq->cmd_flags &= REQ_OP_MASK;
rq->cmd_flags |= (bio->bi_opf & REQ_OP_MASK);
blk_rq_bio_prep(rq->q, rq, bio);
} else {
if (!ll_back_merge_fn(rq->q, rq, bio))
@@ -138,7 +140,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
} while (iov_iter_count(&i));

if (!bio_flagged(bio, BIO_USER_MAPPED))
rq->cmd_flags |= REQ_COPY_USER;
rq->rq_flags |= RQF_COPY_USER;
return 0;

unmap_rq:
@@ -236,7 +238,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);

if (do_copy)
rq->cmd_flags |= REQ_COPY_USER;
rq->rq_flags |= RQF_COPY_USER;

ret = blk_rq_append_bio(rq, bio);
if (unlikely(ret)) {

+40 -49
@@ -199,6 +199,10 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
case REQ_OP_SECURE_ERASE:
split = blk_bio_discard_split(q, *bio, bs, &nsegs);
break;
case REQ_OP_WRITE_ZEROES:
split = NULL;
nsegs = (*bio)->bi_phys_segments;
break;
case REQ_OP_WRITE_SAME:
split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
break;
@@ -237,15 +241,14 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
if (!bio)
return 0;

/*
 * This should probably be returning 0, but blk_add_request_payload()
 * (Christoph!!!!)
 */
if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
return 1;

if (bio_op(bio) == REQ_OP_WRITE_SAME)
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
case REQ_OP_WRITE_ZEROES:
return 0;
case REQ_OP_WRITE_SAME:
return 1;
}

fbio = bio;
cluster = blk_queue_cluster(q);
@@ -402,38 +405,21 @@ new_segment:
*bvprv = *bvec;
}

static inline int __blk_bvec_map_sg(struct request_queue *q, struct bio_vec bv,
struct scatterlist *sglist, struct scatterlist **sg)
{
*sg = sglist;
sg_set_page(*sg, bv.bv_page, bv.bv_len, bv.bv_offset);
return 1;
}

static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
struct scatterlist *sglist,
struct scatterlist **sg)
{
struct bio_vec bvec, bvprv = { NULL };
struct bvec_iter iter;
int nsegs, cluster;

nsegs = 0;
cluster = blk_queue_cluster(q);

switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
/*
 * This is a hack - drivers should be neither modifying the
 * biovec, nor relying on bi_vcnt - but because of
 * blk_add_request_payload(), a discard bio may or may not have
 * a payload we need to set up here (thank you Christoph) and
 * bi_vcnt is really the only way of telling if we need to.
 */
if (!bio->bi_vcnt)
return 0;
/* Fall through */
case REQ_OP_WRITE_SAME:
*sg = sglist;
bvec = bio_iovec(bio);
sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
return 1;
default:
break;
}
int cluster = blk_queue_cluster(q), nsegs = 0;

for_each_bio(bio)
bio_for_each_segment(bvec, bio, iter)
@@ -453,10 +439,14 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
struct scatterlist *sg = NULL;
int nsegs = 0;

if (rq->bio)
if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
nsegs = __blk_bvec_map_sg(q, rq->special_vec, sglist, &sg);
else if (rq->bio && bio_op(rq->bio) == REQ_OP_WRITE_SAME)
nsegs = __blk_bvec_map_sg(q, bio_iovec(rq->bio), sglist, &sg);
else if (rq->bio)
nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);

if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
if (unlikely(rq->rq_flags & RQF_COPY_USER) &&
(blk_rq_bytes(rq) & q->dma_pad_mask)) {
unsigned int pad_len =
(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
@@ -486,12 +476,19 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 * Something must have been wrong if the figured number of
 * segment is bigger than number of req's physical segments
 */
WARN_ON(nsegs > rq->nr_phys_segments);
WARN_ON(nsegs > blk_rq_nr_phys_segments(rq));

return nsegs;
}
EXPORT_SYMBOL(blk_rq_map_sg);

static void req_set_nomerge(struct request_queue *q, struct request *req)
{
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
}

static inline int ll_new_hw_segment(struct request_queue *q,
struct request *req,
struct bio *bio)
@@ -512,9 +509,7 @@ static inline int ll_new_hw_segment(struct request_queue *q,
return 1;

no_merge:
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
req_set_nomerge(q, req);
return 0;
}

@@ -528,9 +523,7 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
req_set_nomerge(q, req);
return 0;
}
if (!bio_flagged(req->biotail, BIO_SEG_VALID))
@@ -552,9 +545,7 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
req_set_nomerge(q, req);
return 0;
}
if (!bio_flagged(bio, BIO_SEG_VALID))
@@ -634,7 +625,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
struct bio *bio;

if (rq->cmd_flags & REQ_MIXED_MERGE)
if (rq->rq_flags & RQF_MIXED_MERGE)
return;

/*
@@ -647,7 +638,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
(bio->bi_opf & REQ_FAILFAST_MASK) != ff);
bio->bi_opf |= ff;
}
rq->cmd_flags |= REQ_MIXED_MERGE;
rq->rq_flags |= RQF_MIXED_MERGE;
}

static void blk_account_io_merge(struct request *req)
@@ -709,7 +700,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
 * makes sure that all involved bios have mixable attributes
 * set properly.
 */
if ((req->cmd_flags | next->cmd_flags) & REQ_MIXED_MERGE ||
if (((req->rq_flags | next->rq_flags) & RQF_MIXED_MERGE) ||
(req->cmd_flags & REQ_FAILFAST_MASK) !=
(next->cmd_flags & REQ_FAILFAST_MASK)) {
blk_rq_set_mixed_merge(req);
@@ -259,6 +259,47 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
return ret;
}

static void blk_mq_stat_clear(struct blk_mq_hw_ctx *hctx)
{
struct blk_mq_ctx *ctx;
unsigned int i;

hctx_for_each_ctx(hctx, ctx, i) {
blk_stat_init(&ctx->stat[BLK_STAT_READ]);
blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
}
}

static ssize_t blk_mq_hw_sysfs_stat_store(struct blk_mq_hw_ctx *hctx,
const char *page, size_t count)
{
blk_mq_stat_clear(hctx);
return count;
}

static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
{
return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
pre, (long long) stat->nr_samples,
(long long) stat->mean, (long long) stat->min,
(long long) stat->max);
}

static ssize_t blk_mq_hw_sysfs_stat_show(struct blk_mq_hw_ctx *hctx, char *page)
{
struct blk_rq_stat stat[2];
ssize_t ret;

blk_stat_init(&stat[BLK_STAT_READ]);
blk_stat_init(&stat[BLK_STAT_WRITE]);

blk_hctx_stat_get(hctx, stat);

ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
return ret;
}

static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_dispatched = {
.attr = {.name = "dispatched", .mode = S_IRUGO },
.show = blk_mq_sysfs_dispatched_show,
@@ -317,6 +358,11 @@ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_poll = {
.show = blk_mq_hw_sysfs_poll_show,
.store = blk_mq_hw_sysfs_poll_store,
};
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_stat = {
.attr = {.name = "stats", .mode = S_IRUGO | S_IWUSR },
.show = blk_mq_hw_sysfs_stat_show,
.store = blk_mq_hw_sysfs_stat_store,
};

static struct attribute *default_hw_ctx_attrs[] = {
&blk_mq_hw_sysfs_queued.attr,
@@ -327,6 +373,7 @@ static struct attribute *default_hw_ctx_attrs[] = {
&blk_mq_hw_sysfs_cpus.attr,
&blk_mq_hw_sysfs_active.attr,
&blk_mq_hw_sysfs_poll.attr,
&blk_mq_hw_sysfs_stat.attr,
NULL,
};
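The new "stats" attribute is a per-hctx sysfs file next to the existing io_poll counters. A userspace sketch that prints it and then clears it; the mq/0 path is an assumption about where the directory for hardware queue 0 of an example nvme0n1 device lives:

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/block/nvme0n1/mq/0/stats";
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);    /* "read : samples=..., mean=..." */
        fclose(f);

        /* writing anything to the file resets the accumulated statistics */
        f = fopen(path, "w");
        if (f) {
            fputs("0\n", f);
            fclose(f);
        }
        return 0;
    }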
Some files were not shown because too many files have changed in this diff.