Merge tag 'docs-5.5a' of git://git.lwn.net/linux

Pull Documentation updates from Jonathan Corbet:
 "Here are the main documentation changes for 5.5:

   - Various kerneldoc script enhancements.

   - More RST conversions; those are slowing down as we run out of
     things to convert, but we're a ways from done still.

   - Dan's "maintainer profile entry" work landed at last. Now we just
     need to get maintainers to fill in the profiles...

   - A reworking of the parallel build setup to work better with a
     variety of systems (and to not take over huge systems entirely in
     particular).

   - The MAINTAINERS file is now converted to RST during the build.
     Hopefully nobody ever tries to print this thing, or they will need
     to load a lot of paper.

   - A script and documentation making it easy for maintainers to add
     Link: tags at commit time.

  Also included is the removal of a bunch of spurious CR characters"

* tag 'docs-5.5a' of git://git.lwn.net/linux: (91 commits)
  docs: remove a bunch of stray CRs
  docs: fix up the maintainer profile document
  libnvdimm, MAINTAINERS: Maintainer Entry Profile
  Maintainer Handbook: Maintainer Entry Profile
  MAINTAINERS: Reclaim the P: tag for Maintainer Entry Profile
  docs, parallelism: Rearrange how jobserver reservations are made
  docs, parallelism: Do not leak blocking mode to other readers
  docs, parallelism: Fix failure path and add comment
  Documentation: Remove bootmem_debug from kernel-parameters.txt
  Documentation: security: core.rst: fix warnings
  Documentation/process/howto/kokr: Update for 4.x -> 5.x versioning
  Documentation/translation: Use Korean for Korean translation title
  docs/memory-barriers.txt: Remove remaining references to mmiowb()
  docs/memory-barriers.txt/kokr: Update I/O section to be clearer about CPU vs thread
  docs/memory-barriers.txt/kokr: Fix style, spacing and grammar in I/O section
  Documentation/kokr: Kill all references to mmiowb()
  docs/memory-barriers.txt/kokr: Rewrite "KERNEL I/O BARRIER EFFECTS" section
  docs: Add initial documentation for devfreq
  Documentation: Document how to get links with git am
  docs: Add request_irq() documentation
  ...
Committed by Linus Torvalds on 2019-12-02 11:51:02 -08:00
89 changed files with 2556 additions and 777 deletions


@@ -156,6 +156,7 @@ Mark Brown <broonie@sirena.org.uk>
Mark Yao <markyao0591@gmail.com> <mark.yao@rock-chips.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
Mathieu Othacehe <m.othacehe@gmail.com>
Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>
Matthew Wilcox <willy@infradead.org> <matthew@wil.cx>


@@ -1875,8 +1875,9 @@ S: The Netherlands
N: Martin Kepplinger
E: martink@posteo.de
E: martin.kepplinger@ginzinger.com
E: martin.kepplinger@puri.sm
W: http://www.martinkepplinger.com
P: 4096R/5AB387D3 F208 2B88 0F9E 4239 3468 6E3F 5003 98DF 5AB3 87D3
D: mma8452 accelerators iio driver
D: pegasus_notetaker input driver
D: Kernel fixes and cleanups

File diff suppressed because it is too large.


@@ -13,7 +13,7 @@ endif
SPHINXBUILD = sphinx-build
SPHINXOPTS =
SPHINXDIRS = .
_SPHINXDIRS = $(patsubst $(srctree)/Documentation/%/conf.py,%,$(wildcard $(srctree)/Documentation/*/conf.py))
_SPHINXDIRS = $(patsubst $(srctree)/Documentation/%/index.rst,%,$(wildcard $(srctree)/Documentation/*/index.rst))
SPHINX_CONF = conf.py
PAPER =
BUILDDIR = $(obj)/output
@@ -33,8 +33,6 @@ ifeq ($(HAVE_SPHINX),0)
else # HAVE_SPHINX
export SPHINXOPTS = $(shell perl -e 'open IN,"sphinx-build --version 2>&1 |"; while (<IN>) { if (m/([\d\.]+)/) { print "-jauto" if ($$1 >= "1.7") } ;} close IN')
# User-friendly check for pdflatex and latexmk
HAVE_PDFLATEX := $(shell if which $(PDFLATEX) >/dev/null 2>&1; then echo 1; else echo 0; fi)
HAVE_LATEXMK := $(shell if which latexmk >/dev/null 2>&1; then echo 1; else echo 0; fi)
@@ -67,6 +65,8 @@ quiet_cmd_sphinx = SPHINX $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/media $2 && \
PYTHONDONTWRITEBYTECODE=1 \
BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(srctree)/$(src)/$5/$(SPHINX_CONF)) \
$(PYTHON) $(srctree)/scripts/jobserver-exec \
$(SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
$(SPHINXBUILD) \
-b $2 \
-c $(abspath $(srctree)/$(src)) \
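The parallelism rework above routes the sphinx invocation through scripts/jobserver-exec, which speaks GNU make's jobserver protocol: the parent make advertises a pair of pipe descriptors in MAKEFLAGS, and a cooperating child claims one token per extra job and writes each token back when it is done. The real helper is a Python script; the C sketch below only illustrates the claim/release cycle, assumes the older --jobserver-auth=R,W descriptor form, and skips the non-blocking subtleties that the commits above deal with::

    /* Hypothetical sketch of a GNU make jobserver client: claim up to
     * "want" extra job slots, then release them.  MAKEFLAGS parsing and
     * error handling are simplified for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *mf = getenv("MAKEFLAGS");
        const char *auth;
        int rfd, wfd, claimed = 0, want = 4;
        char tokens[64];

        if (!mf || !(auth = strstr(mf, "--jobserver-auth=")))
            return 0;       /* not running under a jobserver */
        if (sscanf(auth, "--jobserver-auth=%d,%d", &rfd, &wfd) != 2)
            return 0;       /* unrecognized jobserver spelling */

        /* One implicit slot is already ours; each extra parallel job
         * requires reading one token byte from the shared pipe. */
        while (claimed < want && read(rfd, &tokens[claimed], 1) == 1)
            claimed++;

        printf("running with %d job slots\n", claimed + 1);
        /* ... run the parallel work here ... */

        /* Return every claimed token so other jobs can proceed. */
        if (claimed && write(wfd, tokens, claimed) != claimed)
            perror("jobserver release");
        return 0;
    }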


@@ -56,7 +56,7 @@ setid capabilities from the application completely and refactor the process
spawning semantics in the application (e.g. by using a privileged helper program
to do process spawning and UID/GID transitions). Unfortunately, there are a
number of semantics around process spawning that would be affected by this, such
as fork() calls where the program doesn???t immediately call exec() after the
as fork() calls where the program doesn't immediately call exec() after the
fork(), parent processes specifying custom environment variables or command line
args for spawned child processes, or inheritance of file handles across a
fork()/exec(). Because of this, as solution that uses a privileged helper in
@@ -72,7 +72,7 @@ own user namespace, and only approved UIDs/GIDs could be mapped back to the
initial system user namespace, affectively preventing privilege escalation.
Unfortunately, it is not generally feasible to use user namespaces in isolation,
without pairing them with other namespace types, which is not always an option.
Linux checks for capabilities based off of the user namespace that ???owns??? some
Linux checks for capabilities based off of the user namespace that "owns" some
entity. For example, Linux has the notion that network namespaces are owned by
the user namespace in which they were created. A consequence of this is that
capability checks for access to a given network namespace are done by checking


@@ -1120,8 +1120,9 @@ PAGE_SIZE multiple when read back.
Best-effort memory protection. If the memory usage of a
cgroup is within its effective low boundary, the cgroup's
memory won't be reclaimed unless memory can be reclaimed
from unprotected cgroups. Above the effective low boundary (or
memory won't be reclaimed unless there is no reclaimable
memory available in unprotected cgroups.
Above the effective low boundary (or
effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
@@ -1925,7 +1926,7 @@ Cpuset Interface Files
It accepts only the following input values when written to.
"root" - a paritition root
"root" - a partition root
"member" - a non-root member of a partition
When set to be a partition root, the current cgroup is the


@@ -1,11 +1,11 @@
=============================================================
Usage of the new open sourced rbu (Remote BIOS Update) driver
=============================================================
=========================================
Dell Remote BIOS Update driver (dell_rbu)
=========================================
Purpose
=======
Document demonstrating the use of the Dell Remote BIOS Update driver.
Document demonstrating the use of the Dell Remote BIOS Update driver
for updating BIOS images on Dell servers and desktops.
Scope
@@ -37,7 +37,7 @@ maintains a link list of packets for reading them back.
If the dell_rbu driver is unloaded all the allocated memory is freed.
The rbu driver needs to have an application (as mentioned above)which will
The rbu driver needs to have an application (as mentioned above) which will
inform the BIOS to enable the update in the next system reboot.
The user should not unload the rbu driver after downloading the BIOS image
@@ -71,7 +71,7 @@ be downloaded. It is done as below::
echo XXXX > /sys/devices/platform/dell_rbu/packet_size
In the packet update mechanism, the user needs to create a new file having
packets of data arranged back to back. It can be done as follows
packets of data arranged back to back. It can be done as follows:
The user creates packets header, gets the chunk of the BIOS image and
places it next to the packetheader; now, the packetheader + BIOS image chunk
added together should match the specified packet_size. This makes one
@@ -114,7 +114,7 @@ The entries can be recreated by doing the following::
echo init > /sys/devices/platform/dell_rbu/image_type
.. note:: echoing init in image_type does not change it original value.
.. note:: echoing init in image_type does not change its original value.
Also the driver provides /sys/devices/platform/dell_rbu/data readonly file to
read back the image downloaded.


@@ -31,218 +31,233 @@ configured "bad blocks" will be treated as bad, or bypassed.
This allows the pre-writing of test data and metadata prior to
simulating a "failure" event where bad sectors start to appear.
Table parameters:
-----------------
Table parameters
----------------
<device_path> <offset> <blksz>
Mandatory parameters:
<device_path>: path to the block device.
<offset>: offset to data area from start of device_path
<blksz>: block size in bytes
<device_path>:
Path to the block device.
<offset>:
Offset to data area from start of device_path
<blksz>:
Block size in bytes
(minimum 512, maximum 1073741824, must be a power of 2)
Usage instructions:
-------------------
Usage instructions
------------------
First, find the size (in 512-byte sectors) of the device to be used:
First, find the size (in 512-byte sectors) of the device to be used::
$ sudo blockdev --getsz /dev/vdb1
33552384
$ sudo blockdev --getsz /dev/vdb1
33552384
Create the dm-dust device:
(For a device with a block size of 512 bytes)
$ sudo dmsetup create dust1 --table '0 33552384 dust /dev/vdb1 0 512'
::
$ sudo dmsetup create dust1 --table '0 33552384 dust /dev/vdb1 0 512'
(For a device with a block size of 4096 bytes)
$ sudo dmsetup create dust1 --table '0 33552384 dust /dev/vdb1 0 4096'
::
$ sudo dmsetup create dust1 --table '0 33552384 dust /dev/vdb1 0 4096'
Check the status of the read behavior ("bypass" indicates that all I/O
will be passed through to the underlying device):
$ sudo dmsetup status dust1
0 33552384 dust 252:17 bypass
will be passed through to the underlying device)::
$ sudo dd if=/dev/mapper/dust1 of=/dev/null bs=512 count=128 iflag=direct
128+0 records in
128+0 records out
$ sudo dmsetup status dust1
0 33552384 dust 252:17 bypass
$ sudo dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct
128+0 records in
128+0 records out
$ sudo dd if=/dev/mapper/dust1 of=/dev/null bs=512 count=128 iflag=direct
128+0 records in
128+0 records out
Adding and removing bad blocks:
-------------------------------
$ sudo dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct
128+0 records in
128+0 records out
Adding and removing bad blocks
------------------------------
At any time (i.e.: whether the device has the "bad block" emulation
enabled or disabled), bad blocks may be added or removed from the
device via the "addbadblock" and "removebadblock" messages:
device via the "addbadblock" and "removebadblock" messages::
$ sudo dmsetup message dust1 0 addbadblock 60
kernel: device-mapper: dust: badblock added at block 60
$ sudo dmsetup message dust1 0 addbadblock 60
kernel: device-mapper: dust: badblock added at block 60
$ sudo dmsetup message dust1 0 addbadblock 67
kernel: device-mapper: dust: badblock added at block 67
$ sudo dmsetup message dust1 0 addbadblock 67
kernel: device-mapper: dust: badblock added at block 67
$ sudo dmsetup message dust1 0 addbadblock 72
kernel: device-mapper: dust: badblock added at block 72
$ sudo dmsetup message dust1 0 addbadblock 72
kernel: device-mapper: dust: badblock added at block 72
These bad blocks will be stored in the "bad block list".
While the device is in "bypass" mode, reads and writes will succeed:
While the device is in "bypass" mode, reads and writes will succeed::
$ sudo dmsetup status dust1
0 33552384 dust 252:17 bypass
$ sudo dmsetup status dust1
0 33552384 dust 252:17 bypass
Enabling block read failures:
-----------------------------
Enabling block read failures
----------------------------
To enable the "fail read on bad block" behavior, send the "enable" message:
To enable the "fail read on bad block" behavior, send the "enable" message::
$ sudo dmsetup message dust1 0 enable
kernel: device-mapper: dust: enabling read failures on bad sectors
$ sudo dmsetup message dust1 0 enable
kernel: device-mapper: dust: enabling read failures on bad sectors
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block
With the device in "fail read on bad block" mode, attempting to read a
block will encounter an "Input/output error":
block will encounter an "Input/output error"::
$ sudo dd if=/dev/mapper/dust1 of=/dev/null bs=512 count=1 skip=67 iflag=direct
dd: error reading '/dev/mapper/dust1': Input/output error
0+0 records in
0+0 records out
0 bytes copied, 0.00040651 s, 0.0 kB/s
$ sudo dd if=/dev/mapper/dust1 of=/dev/null bs=512 count=1 skip=67 iflag=direct
dd: error reading '/dev/mapper/dust1': Input/output error
0+0 records in
0+0 records out
0 bytes copied, 0.00040651 s, 0.0 kB/s
...and writing to the bad blocks will remove the blocks from the list,
therefore emulating the "remap" behavior of hard disk drives:
therefore emulating the "remap" behavior of hard disk drives::
$ sudo dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct
128+0 records in
128+0 records out
$ sudo dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct
128+0 records in
128+0 records out
kernel: device-mapper: dust: block 60 removed from badblocklist by write
kernel: device-mapper: dust: block 67 removed from badblocklist by write
kernel: device-mapper: dust: block 72 removed from badblocklist by write
kernel: device-mapper: dust: block 87 removed from badblocklist by write
kernel: device-mapper: dust: block 60 removed from badblocklist by write
kernel: device-mapper: dust: block 67 removed from badblocklist by write
kernel: device-mapper: dust: block 72 removed from badblocklist by write
kernel: device-mapper: dust: block 87 removed from badblocklist by write
Bad block add/remove error handling:
------------------------------------
Bad block add/remove error handling
-----------------------------------
Attempting to add a bad block that already exists in the list will
result in an "Invalid argument" error, as well as a helpful message:
result in an "Invalid argument" error, as well as a helpful message::
$ sudo dmsetup message dust1 0 addbadblock 88
device-mapper: message ioctl on dust1 failed: Invalid argument
kernel: device-mapper: dust: block 88 already in badblocklist
$ sudo dmsetup message dust1 0 addbadblock 88
device-mapper: message ioctl on dust1 failed: Invalid argument
kernel: device-mapper: dust: block 88 already in badblocklist
Attempting to remove a bad block that doesn't exist in the list will
result in an "Invalid argument" error, as well as a helpful message:
result in an "Invalid argument" error, as well as a helpful message::
$ sudo dmsetup message dust1 0 removebadblock 87
device-mapper: message ioctl on dust1 failed: Invalid argument
kernel: device-mapper: dust: block 87 not found in badblocklist
$ sudo dmsetup message dust1 0 removebadblock 87
device-mapper: message ioctl on dust1 failed: Invalid argument
kernel: device-mapper: dust: block 87 not found in badblocklist
Counting the number of bad blocks in the bad block list:
--------------------------------------------------------
Counting the number of bad blocks in the bad block list
-------------------------------------------------------
To count the number of bad blocks configured in the device, run the
following message command:
following message command::
$ sudo dmsetup message dust1 0 countbadblocks
$ sudo dmsetup message dust1 0 countbadblocks
A message will print with the number of bad blocks currently
configured on the device:
configured on the device::
kernel: device-mapper: dust: countbadblocks: 895 badblock(s) found
kernel: device-mapper: dust: countbadblocks: 895 badblock(s) found
Querying for specific bad blocks:
---------------------------------
Querying for specific bad blocks
--------------------------------
To find out if a specific block is in the bad block list, run the
following message command:
following message command::
$ sudo dmsetup message dust1 0 queryblock 72
$ sudo dmsetup message dust1 0 queryblock 72
The following message will print if the block is in the list:
device-mapper: dust: queryblock: block 72 found in badblocklist
The following message will print if the block is in the list::
The following message will print if the block is in the list:
device-mapper: dust: queryblock: block 72 not found in badblocklist
device-mapper: dust: queryblock: block 72 found in badblocklist
The following message will print if the block is not in the list::
device-mapper: dust: queryblock: block 72 not found in badblocklist
The "queryblock" message command will work in both the "enabled"
and "disabled" modes, allowing the verification of whether a block
will be treated as "bad" without having to issue I/O to the device,
or having to "enable" the bad block emulation.
Clearing the bad block list:
----------------------------
Clearing the bad block list
---------------------------
To clear the bad block list (without needing to individually run
a "removebadblock" message command for every block), run the
following message command:
following message command::
$ sudo dmsetup message dust1 0 clearbadblocks
$ sudo dmsetup message dust1 0 clearbadblocks
After clearing the bad block list, the following message will appear:
After clearing the bad block list, the following message will appear::
kernel: device-mapper: dust: clearbadblocks: badblocks cleared
kernel: device-mapper: dust: clearbadblocks: badblocks cleared
If there were no bad blocks to clear, the following message will
appear:
appear::
kernel: device-mapper: dust: clearbadblocks: no badblocks found
kernel: device-mapper: dust: clearbadblocks: no badblocks found
Message commands list:
----------------------
Message commands list
---------------------
Below is a list of the messages that can be sent to a dust device:
Operations on blocks (requires a <blknum> argument):
Operations on blocks (requires a <blknum> argument)::
addbadblock <blknum>
queryblock <blknum>
removebadblock <blknum>
addbadblock <blknum>
queryblock <blknum>
removebadblock <blknum>
...where <blknum> is a block number within range of the device
(corresponding to the block size of the device.)
(corresponding to the block size of the device.)
Single argument message commands:
Single argument message commands::
countbadblocks
clearbadblocks
disable
enable
quiet
countbadblocks
clearbadblocks
disable
enable
quiet
Device removal:
---------------
Device removal
--------------
When finished, remove the device via the "dmsetup remove" command:
When finished, remove the device via the "dmsetup remove" command::
$ sudo dmsetup remove dust1
$ sudo dmsetup remove dust1
Quiet mode:
-----------
Quiet mode
----------
On test runs with many bad blocks, it may be desirable to avoid
excessive logging (from bad blocks added, removed, or "remapped").
This can be done by enabling "quiet mode" via the following message:
This can be done by enabling "quiet mode" via the following message::
$ sudo dmsetup message dust1 0 quiet
$ sudo dmsetup message dust1 0 quiet
This will suppress log messages from add / remove / removed by write
operations. Log messages from "countbadblocks" or "queryblock"
message commands will still print in quiet mode.
The status of quiet mode can be seen by running "dmsetup status":
The status of quiet mode can be seen by running "dmsetup status"::
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block quiet
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block quiet
To disable quiet mode, send the "quiet" message again:
To disable quiet mode, send the "quiet" message again::
$ sudo dmsetup message dust1 0 quiet
$ sudo dmsetup message dust1 0 quiet
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block verbose
$ sudo dmsetup status dust1
0 33552384 dust 252:17 fail_read_on_bad_block verbose
(The presence of "verbose" indicates normal logging.)


@@ -9,6 +9,7 @@ Device Mapper
cache
delay
dm-crypt
dm-dust
dm-flakey
dm-init
dm-integrity


@@ -57,60 +57,61 @@ configure specific aspects of kernel behavior to your liking.
.. toctree::
:maxdepth: 1
initrd
cgroup-v2
cgroup-v1/index
serial-console
braille-console
parport
md
module-signing
rapidio
sysrq
unicode
vga-softcursor
binfmt-misc
mono
java
ras
bcache
blockdev/index
ext4
binderfs
cifs/index
xfs
jfs
ufs
pm/index
thunderbolt
LSM/index
mm/index
namespaces/index
perf-security
acpi/index
aoe/index
auxdisplay/index
bcache
binderfs
binfmt-misc
blockdev/index
braille-console
btmrvl
cgroup-v1/index
cgroup-v2
cifs/index
clearing-warn-once
cpu-load
cputopology
dell_rbu
device-mapper/index
efi-stub
ext4
gpio/index
highuid
hw_random
initrd
iostats
java
jfs
kernel-per-CPU-kthreads
laptops/index
auxdisplay/index
lcd-panel-cgram
ldm
lockup-watchdogs
LSM/index
md
mm/index
module-signing
mono
namespaces/index
numastat
parport
perf-security
pm/index
pnp
rapidio
ras
rtc
serial-console
svga
wimax/index
sysrq
thunderbolt
ufs
unicode
vga-softcursor
video-output
wimax/index
xfs
.. only:: subproject and html


@@ -46,78 +46,79 @@ each snapshot of your disk statistics.
In 2.4, the statistics fields are those after the device name. In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6+ if you look at ``/sys/block/hda/stat``, you'll
find just the eleven fields, beginning with 446216. If you look at
``/proc/diskstats``, the eleven fields will be preceded by the major and
find just the 15 fields, beginning with 446216. If you look at
``/proc/diskstats``, the 15 fields will be preceded by the major and
minor device numbers, and device name. Each of these formats provides
eleven fields of statistics, each meaning exactly the same things.
15 fields of statistics, each meaning exactly the same things.
All fields except field 9 are cumulative since boot. Field 9 should
go to zero as I/Os complete; all others only increase (unless they
overflow and wrap). Yes, these are (32-bit or 64-bit) unsigned long
(native word size) numbers, and on a very busy or long-lived system they
may wrap. Applications should be prepared to deal with that; unless
your observations are measured in large numbers of minutes or hours,
they should not wrap twice before you notice them.
overflow and wrap). Wrapping might eventually occur on a very busy
or long-lived system; so applications should be prepared to deal with
it. Regarding wrapping, the types of the fields are either unsigned
int (32 bit) or unsigned long (32-bit or 64-bit, depending on your
machine) as noted per-field below. Unless your observations are very
spread in time, these fields should not wrap twice before you notice it.
Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all up.
Field 1 -- # of reads completed
Field 1 -- # of reads completed (unsigned long)
This is the total number of reads completed successfully.
Field 2 -- # of reads merged, field 6 -- # of writes merged
Field 2 -- # of reads merged, field 6 -- # of writes merged (unsigned long)
Reads and writes which are adjacent to each other may be merged for
efficiency. Thus two 4K reads may become one 8K read before it is
ultimately handed to the disk, and so it will be counted (and queued)
as only one I/O. This field lets you know how often this was done.
Field 3 -- # of sectors read
Field 3 -- # of sectors read (unsigned long)
This is the total number of sectors read successfully.
Field 4 -- # of milliseconds spent reading
Field 4 -- # of milliseconds spent reading (unsigned int)
This is the total number of milliseconds spent by all reads (as
measured from __make_request() to end_that_request_last()).
Field 5 -- # of writes completed
Field 5 -- # of writes completed (unsigned long)
This is the total number of writes completed successfully.
Field 6 -- # of writes merged
Field 6 -- # of writes merged (unsigned long)
See the description of field 2.
Field 7 -- # of sectors written
Field 7 -- # of sectors written (unsigned long)
This is the total number of sectors written successfully.
Field 8 -- # of milliseconds spent writing
Field 8 -- # of milliseconds spent writing (unsigned int)
This is the total number of milliseconds spent by all writes (as
measured from __make_request() to end_that_request_last()).
Field 9 -- # of I/Os currently in progress
Field 9 -- # of I/Os currently in progress (unsigned int)
The only field that should go to zero. Incremented as requests are
given to appropriate struct request_queue and decremented as they finish.
Field 10 -- # of milliseconds spent doing I/Os
Field 10 -- # of milliseconds spent doing I/Os (unsigned int)
This field increases so long as field 9 is nonzero.
Since 5.0 this field counts jiffies when at least one request was
started or completed. If request runs more than 2 jiffies then some
I/O time will not be accounted unless there are other requests.
Field 11 -- weighted # of milliseconds spent doing I/Os
Field 11 -- weighted # of milliseconds spent doing I/Os (unsigned int)
This field is incremented at each I/O start, I/O completion, I/O
merge, or read of these stats by the number of I/Os in progress
(field 9) times the number of milliseconds spent doing I/O since the
last update of this field. This can provide an easy measure of both
I/O completion time and the backlog that may be accumulating.
Field 12 -- # of discards completed
Field 12 -- # of discards completed (unsigned long)
This is the total number of discards completed successfully.
Field 13 -- # of discards merged
Field 13 -- # of discards merged (unsigned long)
See the description of field 2
Field 14 -- # of sectors discarded
Field 14 -- # of sectors discarded (unsigned long)
This is the total number of sectors discarded successfully.
Field 15 -- # of milliseconds spent discarding
Field 15 -- # of milliseconds spent discarding (unsigned int)
This is the total number of milliseconds spent by all discards (as
measured from __make_request() to end_that_request_last()).
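As an aside, the 15 fields described above can be picked apart from userspace with nothing more than sscanf(); the sketch below is illustrative only (it lumps every field into unsigned long long and skips lines from kernels that expose fewer fields)::

    /* Illustrative userspace parser for /proc/diskstats lines with the
     * 15 statistics fields.  Field widths are simplified here. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/diskstats", "r");
        unsigned int major, minor;
        unsigned long long s[15];
        char name[32], line[512];

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            int n = sscanf(line,
                "%u %u %31s %llu %llu %llu %llu %llu %llu %llu"
                " %llu %llu %llu %llu %llu %llu %llu %llu",
                &major, &minor, name,
                &s[0], &s[1], &s[2], &s[3], &s[4], &s[5], &s[6],
                &s[7], &s[8], &s[9], &s[10], &s[11], &s[12],
                &s[13], &s[14]);
            if (n < 14)     /* need at least the classic 11 fields */
                continue;
            /* Fields 3 and 7 are counts of 512-byte sectors. */
            printf("%s: %llu KiB read, %llu KiB written\n",
                   name, s[2] / 2, s[6] / 2);
        }
        fclose(f);
        return 0;
    }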


@@ -437,8 +437,6 @@
no delay (0).
Format: integer
bootmem_debug [KNL] Enable bootmem allocator debug messages.
bert_disable [ACPI]
Disable BERT OS support on buggy BIOSes.
@@ -983,12 +981,10 @@
earlycon= [KNL] Output early console device and options.
[ARM64] The early console is determined by the
stdout-path property in device tree's chosen node,
or determined by the ACPI SPCR table.
[X86] When used with no options the early console is
determined by the ACPI SPCR table.
When used with no options, the early console is
determined by stdout-path property in device tree's
chosen node or the ACPI SPCR table if supported by
the platform.
cdns,<addr>[,options]
Start an early, polled-mode console on a Cadence


@@ -19,7 +19,9 @@ devices/imx8_ddr0/format/. The "events" directory describes the events types
hardware supported that can be used with perf tool, see /sys/bus/event_source/
devices/imx8_ddr0/events/. The "caps" directory describes filter features implemented
in DDR PMU, see /sys/bus/events_source/devices/imx8_ddr0/caps/.
e.g.::
.. code-block:: bash
perf stat -a -e imx8_ddr0/cycles/ cmd
perf stat -a -e imx8_ddr0/read/,imx8_ddr0/write/ cmd
@@ -35,24 +37,31 @@ value 1 for supported.
Filter is defined with two configuration parts:
--AXI_ID defines AxID matching value.
--AXI_MASKING defines which bits of AxID are meaningful for the matching.
0corresponding bit is masked.
1: corresponding bit is not masked, i.e. used to do the matching.
- 0: corresponding bit is masked.
- 1: corresponding bit is not masked, i.e. used to do the matching.
AXI_ID and AXI_MASKING are mapped on DPCR1 register in performance counter.
When non-masked bits are matching corresponding AXI_ID bits then counter is
incremented. Perf counter is incremented if
AxID && AXI_MASKING == AXI_ID && AXI_MASKING
AxID && AXI_MASKING == AXI_ID && AXI_MASKING
This filter doesn't support filter different AXI ID for axid-read and axid-write
event at the same time as this filter is shared between counters.
e.g.::
perf stat -a -e imx8_ddr0/axid-read,axi_mask=0xMMMM,axi_id=0xDDDD/ cmd
perf stat -a -e imx8_ddr0/axid-write,axi_mask=0xMMMM,axi_id=0xDDDD/ cmd
NOTE: axi_mask is inverted in userspace(i.e. set bits are bits to mask), and
it will be reverted in driver automatically. so that the user can just specify
axi_id to monitor a specific id, rather than having to specify axi_mask.
e.g.::
.. code-block:: bash
perf stat -a -e imx8_ddr0/axid-read,axi_mask=0xMMMM,axi_id=0xDDDD/ cmd
perf stat -a -e imx8_ddr0/axid-write,axi_mask=0xMMMM,axi_id=0xDDDD/ cmd
.. note::
axi_mask is inverted in userspace(i.e. set bits are bits to mask), and
it will be reverted in driver automatically. so that the user can just specify
axi_id to monitor a specific id, rather than having to specify axi_mask.
.. code-block:: bash
perf stat -a -e imx8_ddr0/axid-read,axi_id=0x12/ cmd, which will monitor ARID=0x12
* With DDR_CAP_AXI_ID_FILTER_ENHANCED quirk(filter: 1, enhanced_filter: 1).
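The matching rule quoted above is easier to read as a plain mask-and-compare; a hypothetical helper (names invented for illustration, and note the comparison is bitwise, not logical)::

    #include <stdbool.h>
    #include <stdint.h>

    /* A transaction is counted only when the unmasked bits of its AxID
     * equal the configured AXI_ID. */
    static bool axi_id_matches(uint32_t axid, uint32_t axi_id,
                               uint32_t axi_masking)
    {
        return (axid & axi_masking) == (axi_id & axi_masking);
    }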


@@ -8,6 +8,7 @@ Performance monitor support
:maxdepth: 1
hisi-pmu
imx-ddr
qcom_l2_pmu
qcom_l3_pmu
arm-ccn


@@ -831,8 +831,8 @@ printk_ratelimit:
=================
Some warning messages are rate limited. printk_ratelimit specifies
the minimum length of time between these messages (in jiffies), by
default we allow one every 5 seconds.
the minimum length of time between these messages (in seconds).
The default value is 5 seconds.
A value of 0 will disable rate limiting.
@@ -845,6 +845,8 @@ seconds, we do allow a burst of messages to pass through.
printk_ratelimit_burst specifies the number of messages we can
send before ratelimiting kicks in.
The default value is 10 messages.
printk_devkmsg:
===============
@@ -1101,7 +1103,7 @@ During initialization the kernel sets this value such that even if the
maximum number of threads is created, the thread structures occupy only
a part (1/8th) of the available RAM pages.
The minimum value that can be written to threads-max is 20.
The minimum value that can be written to threads-max is 1.
The maximum value that can be written to threads-max is given by the
constant FUTEX_TID_MASK (0x3fffffff).
@@ -1109,10 +1111,6 @@ constant FUTEX_TID_MASK (0x3fffffff).
If a value outside of this range is written to threads-max an error
EINVAL occurs.
The value written is checked against the available RAM pages. If the
thread structures would occupy too much (more than 1/8th) of the
available RAM pages threads-max is reduced accordingly.
unknown_nmi_panic:
==================


@@ -37,7 +37,8 @@ needs_sphinx = '1.3'
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include', 'cdomain',
'kfigure', 'sphinx.ext.ifconfig', 'automarkup']
'kfigure', 'sphinx.ext.ifconfig', 'automarkup',
'maintainers_include']
# The name of the math extension changed on Sphinx 1.4
if (major == 1 and minor > 3) or (major > 1):


@@ -23,7 +23,7 @@ begins with the creation of a pool using one of:
.. kernel-doc:: lib/genalloc.c
:functions: devm_gen_pool_create
A call to :c:func:`gen_pool_create` will create a pool. The granularity of
A call to gen_pool_create() will create a pool. The granularity of
allocations is set with min_alloc_order; it is a log-base-2 number like
those used by the page allocator, but it refers to bytes rather than pages.
So, if min_alloc_order is passed as 3, then all allocations will be a
@@ -32,7 +32,7 @@ required to track the memory in the pool. The nid parameter specifies
which NUMA node should be used for the allocation of the housekeeping
structures; it can be -1 if the caller doesn't care.
The "managed" interface :c:func:`devm_gen_pool_create` ties the pool to a
The "managed" interface devm_gen_pool_create() ties the pool to a
specific device. Among other things, it will automatically clean up the
pool when the given device is destroyed.
@@ -53,32 +53,32 @@ to the pool. That can be done with one of:
:functions: gen_pool_add
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_add_virt
:functions: gen_pool_add_owner
A call to :c:func:`gen_pool_add` will place the size bytes of memory
A call to gen_pool_add() will place the size bytes of memory
starting at addr (in the kernel's virtual address space) into the given
pool, once again using nid as the node ID for ancillary memory allocations.
The :c:func:`gen_pool_add_virt` variant associates an explicit physical
The gen_pool_add_virt() variant associates an explicit physical
address with the memory; this is only necessary if the pool will be used
for DMA allocations.
The functions for allocating memory from the pool (and putting it back)
are:
.. kernel-doc:: lib/genalloc.c
.. kernel-doc:: include/linux/genalloc.h
:functions: gen_pool_alloc
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_dma_alloc
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_free
:functions: gen_pool_free_owner
As one would expect, :c:func:`gen_pool_alloc` will allocate size< bytes
from the given pool. The :c:func:`gen_pool_dma_alloc` variant allocates
As one would expect, gen_pool_alloc() will allocate size< bytes
from the given pool. The gen_pool_dma_alloc() variant allocates
memory for use with DMA operations, returning the associated physical
address in the space pointed to by dma. This will only work if the memory
was added with :c:func:`gen_pool_add_virt`. Note that this function
was added with gen_pool_add_virt(). Note that this function
departs from the usual genpool pattern of using unsigned long values to
represent kernel addresses; it returns a void * instead.
@@ -89,14 +89,14 @@ return. If that sort of control is needed, the following functions will be
of interest:
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_alloc_algo
:functions: gen_pool_alloc_algo_owner
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_set_algo
Allocations with :c:func:`gen_pool_alloc_algo` specify an algorithm to be
Allocations with gen_pool_alloc_algo() specify an algorithm to be
used to choose the memory to be allocated; the default algorithm can be set
with :c:func:`gen_pool_set_algo`. The data value is passed to the
with gen_pool_set_algo(). The data value is passed to the
algorithm; most ignore it, but it is occasionally needed. One can,
naturally, write a special-purpose algorithm, but there is a fair set
already available:
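To make the flow concrete, here is a hypothetical in-kernel user of the calls discussed above; the pool order, sizes and the kzalloc()-backed region are purely illustrative (real users typically manage special memory such as on-chip SRAM)::

    #include <linux/errno.h>
    #include <linux/genalloc.h>
    #include <linux/slab.h>

    /* Hypothetical example: carve a small pool out of a kzalloc'ed
     * buffer and hand out 32-byte chunks from it. */
    static struct gen_pool *pool;
    static void *backing;

    static int example_pool_init(void)
    {
        unsigned long chunk;

        /* min_alloc_order = 5 -> allocations come in 32-byte units */
        pool = gen_pool_create(5, -1);
        if (!pool)
            return -ENOMEM;

        backing = kzalloc(4096, GFP_KERNEL);
        if (!backing)
            goto out_pool;

        if (gen_pool_add(pool, (unsigned long)backing, 4096, -1))
            goto out_mem;

        chunk = gen_pool_alloc(pool, 32);
        if (!chunk)
            goto out_mem;

        /* ... use the 32-byte chunk ... */

        gen_pool_free(pool, chunk, 32);
        return 0;

    out_mem:
        kfree(backing);
    out_pool:
        gen_pool_destroy(pool);
        return -ENOMEM;
    }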


@@ -26,7 +26,7 @@ Rationale
=========
The original implementation of interrupt handling in Linux uses the
:c:func:`__do_IRQ` super-handler, which is able to deal with every type of
__do_IRQ() super-handler, which is able to deal with every type of
interrupt logic.
Originally, Russell King identified different types of handlers to build
@@ -43,7 +43,7 @@ During the implementation we identified another type:
- Fast EOI type
In the SMP world of the :c:func:`__do_IRQ` super-handler another type was
In the SMP world of the __do_IRQ() super-handler another type was
identified:
- Per CPU type
@@ -83,7 +83,7 @@ IRQ-flow implementation for 'level type' interrupts and add a
(sub)architecture specific 'edge type' implementation.
To make the transition to the new model easier and prevent the breakage
of existing implementations, the :c:func:`__do_IRQ` super-handler is still
of existing implementations, the __do_IRQ() super-handler is still
available. This leads to a kind of duality for the time being. Over time
the new model should be used in more and more architectures, as it
enables smaller and cleaner IRQ subsystems. It's deprecated for three
@@ -116,7 +116,7 @@ status information and pointers to the interrupt flow method and the
interrupt chip structure which are assigned to this interrupt.
Whenever an interrupt triggers, the low-level architecture code calls
into the generic interrupt code by calling :c:func:`desc->handle_irq`. This
into the generic interrupt code by calling desc->handle_irq(). This
high-level IRQ handling function only uses desc->irq_data.chip
primitives referenced by the assigned chip descriptor structure.
@@ -125,27 +125,29 @@ High-level Driver API
The high-level Driver API consists of following functions:
- :c:func:`request_irq`
- request_irq()
- :c:func:`free_irq`
- request_threaded_irq()
- :c:func:`disable_irq`
- free_irq()
- :c:func:`enable_irq`
- disable_irq()
- :c:func:`disable_irq_nosync` (SMP only)
- enable_irq()
- :c:func:`synchronize_irq` (SMP only)
- disable_irq_nosync() (SMP only)
- :c:func:`irq_set_irq_type`
- synchronize_irq() (SMP only)
- :c:func:`irq_set_irq_wake`
- irq_set_irq_type()
- :c:func:`irq_set_handler_data`
- irq_set_irq_wake()
- :c:func:`irq_set_chip`
- irq_set_handler_data()
- :c:func:`irq_set_chip_data`
- irq_set_chip()
- irq_set_chip_data()
See the autogenerated function documentation for details.
@@ -154,19 +156,19 @@ High-level IRQ flow handlers
The generic layer provides a set of pre-defined irq-flow methods:
- :c:func:`handle_level_irq`
- handle_level_irq()
- :c:func:`handle_edge_irq`
- handle_edge_irq()
- :c:func:`handle_fasteoi_irq`
- handle_fasteoi_irq()
- :c:func:`handle_simple_irq`
- handle_simple_irq()
- :c:func:`handle_percpu_irq`
- handle_percpu_irq()
- :c:func:`handle_edge_eoi_irq`
- handle_edge_eoi_irq()
- :c:func:`handle_bad_irq`
- handle_bad_irq()
The interrupt flow handlers (either pre-defined or architecture
specific) are assigned to specific interrupts by the architecture either
@@ -325,14 +327,14 @@ Delayed interrupt disable
This per interrupt selectable feature, which was introduced by Russell
King in the ARM interrupt implementation, does not mask an interrupt at
the hardware level when :c:func:`disable_irq` is called. The interrupt is kept
the hardware level when disable_irq() is called. The interrupt is kept
enabled and is masked in the flow handler when an interrupt event
happens. This prevents losing edge interrupts on hardware which does not
store an edge interrupt event while the interrupt is disabled at the
hardware level. When an interrupt arrives while the IRQ_DISABLED flag
is set, then the interrupt is masked at the hardware level and the
IRQ_PENDING bit is set. When the interrupt is re-enabled by
:c:func:`enable_irq` the pending bit is checked and if it is set, the interrupt
enable_irq() the pending bit is checked and if it is set, the interrupt
is resent either via hardware or by a software resend mechanism. (It's
necessary to enable CONFIG_HARDIRQS_SW_RESEND when you want to use
the delayed interrupt disable feature and your hardware is not capable
@@ -369,7 +371,7 @@ handler(s) to use these basic units of low-level functionality.
__do_IRQ entry point
====================
The original implementation :c:func:`__do_IRQ` was an alternative entry point
The original implementation __do_IRQ() was an alternative entry point
for all types of interrupts. It no longer exists.
This handler turned out to be not suitable for all interrupt hardware
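Since the series adds request_irq() documentation, a minimal, hypothetical driver fragment using the high-level API described above may help; the structure, IRQ number and IRQF_SHARED choice are placeholders::

    #include <linux/interrupt.h>

    /* Hypothetical device structure and interrupt handler. */
    struct my_dev {
        void __iomem *regs;
        int irq;
    };

    static irqreturn_t my_dev_isr(int irq, void *dev_id)
    {
        struct my_dev *dev = dev_id;

        /* Check and acknowledge the interrupt in the hardware here;
         * return IRQ_NONE if this device did not raise it. */
        (void)dev;
        return IRQ_HANDLED;
    }

    static int my_dev_setup_irq(struct my_dev *dev)
    {
        int ret;

        /* Shared line; dev is handed back to the handler as dev_id. */
        ret = request_irq(dev->irq, my_dev_isr, IRQF_SHARED,
                          "my_dev", dev);
        if (ret)
            return ret;

        /* ... later, on teardown: free_irq(dev->irq, dev); */
        return 0;
    }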


@@ -88,10 +88,11 @@ Selecting memory allocator
==========================
The most straightforward way to allocate memory is to use a function
from the :c:func:`kmalloc` family. And, to be on the safe size it's
best to use routines that set memory to zero, like
:c:func:`kzalloc`. If you need to allocate memory for an array, there
are :c:func:`kmalloc_array` and :c:func:`kcalloc` helpers.
from the kmalloc() family. And, to be on the safe side it's best to use
routines that set memory to zero, like kzalloc(). If you need to
allocate memory for an array, there are kmalloc_array() and kcalloc()
helpers. The helpers struct_size(), array_size() and array3_size() can
be used to safely calculate object sizes without overflowing.
The maximal size of a chunk that can be allocated with `kmalloc` is
limited. The actual limit depends on the hardware and the kernel
@@ -102,29 +103,26 @@ The address of a chunk allocated with `kmalloc` is aligned to at least
ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
alignment is also guaranteed to be at least the respective size.
For large allocations you can use :c:func:`vmalloc` and
:c:func:`vzalloc`, or directly request pages from the page
allocator. The memory allocated by `vmalloc` and related functions is
not physically contiguous.
For large allocations you can use vmalloc() and vzalloc(), or directly
request pages from the page allocator. The memory allocated by `vmalloc`
and related functions is not physically contiguous.
If you are not sure whether the allocation size is too large for
`kmalloc`, it is possible to use :c:func:`kvmalloc` and its
derivatives. It will try to allocate memory with `kmalloc` and if the
allocation fails it will be retried with `vmalloc`. There are
restrictions on which GFP flags can be used with `kvmalloc`; please
see :c:func:`kvmalloc_node` reference documentation. Note that
`kvmalloc` may return memory that is not physically contiguous.
`kmalloc`, it is possible to use kvmalloc() and its derivatives. It will
try to allocate memory with `kmalloc` and if the allocation fails it
will be retried with `vmalloc`. There are restrictions on which GFP
flags can be used with `kvmalloc`; please see kvmalloc_node() reference
documentation. Note that `kvmalloc` may return memory that is not
physically contiguous.
If you need to allocate many identical objects you can use the slab
cache allocator. The cache should be set up with
:c:func:`kmem_cache_create` or :c:func:`kmem_cache_create_usercopy`
before it can be used. The second function should be used if a part of
the cache might be copied to the userspace. After the cache is
created :c:func:`kmem_cache_alloc` and its convenience wrappers can
allocate memory from that cache.
cache allocator. The cache should be set up with kmem_cache_create() or
kmem_cache_create_usercopy() before it can be used. The second function
should be used if a part of the cache might be copied to the userspace.
After the cache is created kmem_cache_alloc() and its convenience
wrappers can allocate memory from that cache.
When the allocated memory is no longer needed it must be freed. You
can use :c:func:`kvfree` for the memory allocated with `kmalloc`,
`vmalloc` and `kvmalloc`. The slab caches should be freed with
:c:func:`kmem_cache_free`. And don't forget to destroy the cache with
:c:func:`kmem_cache_destroy`.
When the allocated memory is no longer needed it must be freed. You can
use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
don't forget to destroy the cache with kmem_cache_destroy().
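A short, hypothetical example tying the updated text together; the structure and sizes are invented for illustration::

    #include <linux/types.h>
    #include <linux/overflow.h>
    #include <linux/slab.h>
    #include <linux/mm.h>

    struct item {
        u64 key;
        u64 value;
    };

    struct table {
        size_t nr;
        struct item items[];
    };

    static struct table *table_alloc(size_t nr)
    {
        /* struct_size() guards the header + flexible-array computation
         * against overflow; kvmalloc() falls back to vmalloc() when the
         * request is too large or too fragmented for kmalloc(). */
        struct table *t = kvmalloc(struct_size(t, items, nr), GFP_KERNEL);

        if (t)
            t->nr = nr;
        return t;
    }

    static void table_free(struct table *t)
    {
        /* kvfree() handles both kmalloc()- and vmalloc()-backed memory. */
        kvfree(t);
    }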


@@ -11,7 +11,7 @@ User Space Memory Access
.. kernel-doc:: arch/x86/lib/usercopy_32.c
:export:
.. kernel-doc:: mm/util.c
.. kernel-doc:: mm/gup.c
:functions: get_user_pages_fast
.. _mm-api-gfp-flags:

Some files were not shown because too many files have changed in this diff.