Commit Graph

203 Commits

Author SHA1 Message Date
Stefan Brüns
c2cbd4276e dmaengine: Mark struct dma_slave_caps kernel-doc correctly, clarify
struct dma_slave_caps documentation omitted the correct kernel-doc
opening comment mark.

Document byte granularity and interpretation of the src/dst_addr_widths
bit flag fields used by struct dma_slave_caps and struct dma_device.

Add punctuation to their "directions" member documentation, and clean up
the wording of the description.
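
For context, kernel-doc only picks up comments that open with the "/**"
marker; a minimal, made-up sketch of the convention (illustrative only,
not the actual header text):

    /**
     * struct example_caps - capabilities advertised by a channel (illustrative)
     * @directions: bit mask of enum dma_transfer_direction values supported
     *	by the channel.
     */
    struct example_caps {
    	u32 directions;
    };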

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:34:28 +05:30
Stefan Brüns
3f7632e1ba dmaengine: List all allowed values for src/dst_addr_width in kernel doc
Commit 93c6ee94c1 ("dma: Support for 3 bytes word size") and
commit 534a729866 ("dmaengine: Add 16 bytes, 32 bytes and 64 bytes
bus widths") added additional values for the allowed word size, but
omitted these from the struct dma_slave_config documentation.
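
As a hedged illustration only, a client picking one of the documented
widths when configuring a slave channel (the channel, FIFO address and
burst size are placeholders):

    struct dma_slave_config cfg = {
    	.direction      = DMA_MEM_TO_DEV,
    	.dst_addr       = fifo_phys,	/* placeholder device address */
    	/* any enum dma_slave_buswidth value is allowed here */
    	.dst_addr_width = DMA_SLAVE_BUSWIDTH_3_BYTES,
    	.dst_maxburst   = 4,
    };
    int ret = dmaengine_slave_config(chan, &cfg);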

Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:34:28 +05:30
Vinod Koul
41bd0314fa Merge branch 'topic/dmatest' into for-linus 2017-09-06 21:55:10 +05:30
Abhishek Sahu
3e00ab4ac5 dmaengine: add DMA_PREP_CMD for non-Data descriptors.
Some DMA controllers are capable of issuing commands to a peripheral
via DMA. These commands can be a list of register reads/writes, and they
differ from normal data reads/writes. This patch adds a new DMA_PREP_CMD
flag to the DMA API which tells the driver that the data passed to the
DMA API is command data, and the DMA controller driver will form the
descriptor in the required format.

This flag can be used by any DMA controller driver which requires its
descriptors in a different format for non-data descriptors.
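
A rough sketch of how a client might pass the new flag when preparing a
command buffer (buffer names and length are hypothetical):

    /* cmd_dma points at controller-specific command words, already mapped */
    struct dma_async_tx_descriptor *desc;

    desc = dmaengine_prep_slave_single(chan, cmd_dma, cmd_len,
    				       DMA_MEM_TO_DEV,
    				       DMA_PREP_CMD | DMA_CTRL_ACK);
    if (!desc)
    	return -EIO;
    dmaengine_submit(desc);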

Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:40:18 +05:30
Dave Jiang
c678fa6634 dmaengine: remove DMA_SG as it is dead code in kernel
There are no in-kernel consumers of the DMA_SG op. Remove the operation,
its dead code, and the test code in dmatest.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
Cc: Kedareswara rao Appana <appana.durga.rao@xilinx.com>
Cc: Li Yang <leoyang.li@nxp.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-22 09:22:11 +05:30
Boris Brezillon
77d65d6f3d dmaengine: Provide a wrapper for memcpy operations
Almost all ->device_prep_dma_xx() methods have a wrapper defined in
dmaengine.h. Add one for ->device_prep_dma_memcpy().
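
The wrapper follows the same shape as the existing prep helpers; roughly
(a sketch, see dmaengine.h for the exact definition):

    static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy(
    		struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
    		size_t len, unsigned long flags)
    {
    	if (!chan || !chan->device || !chan->device->device_prep_dma_memcpy)
    		return NULL;

    	return chan->device->device_prep_dma_memcpy(chan, dest, src,
    						    len, flags);
    }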

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-01-31 09:03:20 +05:30
Peter Ujfalusi
54cd255808 dmaengine: dma_slave_config: add support for slave port window
Some slave devices use an address window instead of a single register for
reading and/or writing data. With src/dst_port_window_size the address
window can be specified, and the DMAengine driver should use this
information to correctly set up the transfer so it loops within the
provided window.
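
A hedged example of a client describing such a window in its slave
configuration (the window base and size are placeholders):

    struct dma_slave_config cfg = {
    	.direction            = DMA_DEV_TO_MEM,
    	.src_addr             = window_base,	/* start of the window */
    	.src_addr_width       = DMA_SLAVE_BUSWIDTH_4_BYTES,
    	/* device exposes an 8-word window; the DMA should loop over it */
    	.src_port_window_size = 8,
    };
    int ret = dmaengine_slave_config(chan, &cfg);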

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-11-30 08:54:04 +05:30
Dave Jiang
f067025bc6 dmaengine: add support to provide error result from a DMA transaction
Add a new callback that will provide the error result for a transaction.
The result is allocated on the stack, and the callback should create a copy
if it wishes to retain the information after exiting. The result parameter
is now defined and takes over the dummy void pointer previously placed in
the helper functions. dmaengine drivers should start converting to the new
"callback_result" callback in order to receive transaction results.
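
A rough client-side sketch of the new callback (the bookkeeping struct is
hypothetical; the result types are the ones introduced here):

    struct example_state {		/* hypothetical client bookkeeping */
    	u32 residue;
    	bool failed;
    };

    static void example_done(void *param, const struct dmaengine_result *result)
    {
    	struct example_state *st = param;

    	/* "result" lives on the issuer's stack: copy it, don't keep the pointer */
    	st->residue = result->residue;
    	st->failed  = result->result != DMA_TRANS_NOERROR;
    }

    /* at submit time: */
    desc->callback_result = example_done;
    desc->callback_param  = &state;		/* an instance of example_state */
    dmaengine_submit(desc);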

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-08-08 08:11:42 +05:30
Vinod Koul
757d12e584 dmaengine: ensure dmaengine helpers check valid callback
dmaengine has various device callbacks and exposes helper functions to
invoke them. These helpers should check whether the channel, device, and
callback are valid before invoking them.
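
The guarded-helper pattern looks roughly like this (an illustrative sketch
with a made-up name; the in-tree helpers differ in detail):

    static inline int example_pause(struct dma_chan *chan)
    {
    	if (!chan || !chan->device || !chan->device->device_pause)
    		return -ENOSYS;

    	return chan->device->device_pause(chan);
    }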

Reported-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-04-12 21:07:06 +05:30
Vinod Koul
8bce4c8765 Merge branch 'topic/pl330' into for-linus 2016-03-14 11:18:12 +05:30
Vinod Koul
9575632052 dmaengine: make slave address physical
The slave dmaengine semantics required the client to map DMA addresses
and pass the DMA address to dmaengine drivers. This was a convenient
notion coming from generic DMA offload cases, where dmaengines are
interchangeable and the client is not aware of which engine to map to.

But in the slave case, we know the dmaengine and always use a specific
one. Furthermore, IOMMU cases can break this notion, so make the slave
address a physical address; the dmaengine driver will now do the required
mapping.
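
In practice a client now hands the peripheral's register address to the
configuration directly, roughly as below (the resource and FIFO offset are
placeholders):

    /* res is the peripheral's MMIO resource from platform_get_resource() */
    cfg.dst_addr       = res->start + EXAMPLE_FIFO_OFFSET;	/* physical address */
    cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
    ret = dmaengine_slave_config(chan, &cfg);
    /* any mapping that is still required is done by the dmaengine driver */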

Original-patch-by: Linus Walleij <linus.walleij@linaro.org>
Original-patch-Acked-by: Lee Jones <lee.jones@linaro.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-02-22 21:22:40 +05:30
Shawn Lin
6d5bbed30f dmaengine: core: expose max burst capability to clients
This patch adds max_burst to dma_get_slave_caps so that clients can get
the burst capability of the slave DMA controller.
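
A hedged example of a client querying the new capability (the device
pointer is a placeholder):

    struct dma_slave_caps caps;

    if (!dma_get_slave_caps(chan, &caps))
    	dev_dbg(dev, "max burst supported by the channel: %u\n",
    		caps.max_burst);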

Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2016-02-09 09:01:41 +05:30
Vinod Koul
d3f1e93ce8 Merge branch 'topic/async' into for-linus 2016-01-06 15:17:47 +05:30
Vinod Koul
7c7b680fa6 Merge branch 'topic/univ_api' into for-linus 2016-01-06 15:17:32 +05:30
Peter Ujfalusi
a8135d0d79 dmaengine: core: Introduce new, universal API to request a channel
The two API functions can cover most, if not all, current APIs used to
request a channel. With minimal effort, dmaengine drivers, platforms and
dmaengine user drivers can be converted to use the two functions.

struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask);

Requests any channel matching the given capabilities; it can be used to
request a channel for memcpy, memset, xor, etc., where no hardware
synchronization is needed.

struct dma_chan *dma_request_chan(struct device *dev, const char *name);
Requests a slave channel. dma_request_chan() will try to find the channel
via DT or ACPI; in case the kernel booted in non-DT/ACPI mode, it will use
a filter lookup table and retrieve the needed information from the
dma_slave_map provided by the DMA drivers.
This legacy mode needs changes in platform code and in dmaengine drivers;
finally, the dmaengine user drivers can be converted:

For each dmaengine driver, an array of the DMA device name, slave name
and filter-function parameter needs to be added:

static const struct dma_slave_map da830_edma_map[] = {
	{ "davinci-mcasp.0", "rx", EDMA_FILTER_PARAM(0, 0) },
	{ "davinci-mcasp.0", "tx", EDMA_FILTER_PARAM(0, 1) },
	{ "davinci-mcasp.1", "rx", EDMA_FILTER_PARAM(0, 2) },
	{ "davinci-mcasp.1", "tx", EDMA_FILTER_PARAM(0, 3) },
	{ "davinci-mcasp.2", "rx", EDMA_FILTER_PARAM(0, 4) },
	{ "davinci-mcasp.2", "tx", EDMA_FILTER_PARAM(0, 5) },
	{ "spi_davinci.0", "rx", EDMA_FILTER_PARAM(0, 14) },
	{ "spi_davinci.0", "tx", EDMA_FILTER_PARAM(0, 15) },
	{ "da830-mmc.0", "rx", EDMA_FILTER_PARAM(0, 16) },
	{ "da830-mmc.0", "tx", EDMA_FILTER_PARAM(0, 17) },
	{ "spi_davinci.1", "rx", EDMA_FILTER_PARAM(0, 18) },
	{ "spi_davinci.1", "tx", EDMA_FILTER_PARAM(0, 19) },
};

This information is going to be needed by the dmaengine driver, so the
platform_data needs to be modified and the driver map should be added to
the pdata of the DMA driver:

da8xx_edma0_pdata.slave_map = da830_edma_map;
da8xx_edma0_pdata.slavecnt = ARRAY_SIZE(da830_edma_map);

The DMA driver then needs to configure the needed device -> filter_fn
mapping before it registers with dma_async_device_register():

ecc->dma_slave.filter_map.map = info->slave_map;
ecc->dma_slave.filter_map.mapcnt = info->slavecnt;
ecc->dma_slave.filter_map.fn = edma_filter_fn;

When neither DT nor ACPI lookup is available, dma_request_chan() will try
to match the requester's device name against the filter_map's list of
device names; when a match is found, it will use the information from the
dma_slave_map to get the channel with the dma_get_channel() internal
function.
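
On the client side, the new API then reduces to something like this hedged
pattern (the device pointer and channel name are placeholders):

    struct dma_chan *chan;

    chan = dma_request_chan(&pdev->dev, "tx");
    if (IS_ERR(chan)) {
    	if (PTR_ERR(chan) == -EPROBE_DEFER)
    		return -EPROBE_DEFER;	/* provider not ready yet */
    	chan = NULL;			/* e.g. fall back to PIO */
    }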

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-12-18 11:17:26 +05:30
Lars-Peter Clausen
b1d6ab1aa8 dmaengine: Add might_sleep() to dmaengine_synchronize()
Implementations of dmaengine_synchronize() are allowed to sleep, hence the
function must not be called from atomic context. Add might_sleep() to
dmaengine_synchronize() to make it easier to detect non-compliant callers.
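
The change amounts to roughly the following (a sketch, not the verbatim
helper):

    static inline void dmaengine_synchronize(struct dma_chan *chan)
    {
    	might_sleep();		/* flags callers in atomic context */

    	if (chan->device->device_synchronize)
    		chan->device->device_synchronize(chan);
    }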

Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-12-05 13:43:30 +05:30
Robert Jarzmik
9eeacd3a2f dmaengine: enable DMA_CTRL_REUSE
In the current state, the capability of transfer reuse can neither be
set by a slave dmaengine driver nor used by a client driver, because
the capability is not available to dma_get_slave_caps().

Fix this by adding a way to declare the capability.
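
A hedged sketch of the client side of that declaration (the flag variable
is a placeholder):

    struct dma_slave_caps caps;

    /* a client checks the capability before trying to reuse descriptors */
    if (!dma_get_slave_caps(chan, &caps) && caps.descriptor_reuse)
    	reuse_supported = true;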

Fixes: 272420214d ("dmaengine: Add DMA_CTRL_REUSE")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-11-16 08:32:16 +05:30
Lars-Peter Clausen
b36f09c3c4 dmaengine: Add transfer termination synchronization support
The DMAengine API has a long standing race condition that is inherent to
the API itself. Calling dmaengine_terminate_all() is supposed to stop and
abort any pending or active transfers that have previously been submitted.
Unfortunately it is possible that this operation races against a currently
running (or with some drivers also scheduled) completion callback.

Since the API allows dmaengine_terminate_all() to be called from atomic
context as well as from within a completion callback it is not possible to
synchronize to the execution of the completion callback from within
dmaengine_terminate_all() itself.

This means that a user of the DMAengine API does not know when it is safe
to free resources used in the completion callback, which can result in a
use-after-free race condition.

This patch addresses the issue by introducing an explicit synchronization
primitive to the DMAengine API called dmaengine_synchronize().

The existing dmaengine_terminate_all() is deprecated in favor of
dmaengine_terminate_sync() and dmaengine_terminate_async(). The former
aborts all pending and active transfers and synchronizes to the current
context, meaning it will wait until all running completion callbacks have
finished. This means it is only possible to call this function from
non-atomic context. The latter function does not synchronize, but can still
be used in atomic context or from within a completion callback. It has to be
followed up by dmaengine_synchronize() before a client can free the
resources used in a completion callback.

In addition to this, the semantics of the device_terminate_all() callback
are slightly relaxed by this patch. It is now OK for a driver to only
schedule the termination of the active transfer; it does not necessarily
have to wait until the DMA controller has completely stopped. The driver
must ensure though that the controller has stopped and no longer accesses
any memory when the device_synchronize() callback returns.

This was in part done since most drivers do not pay attention to this
anyway at the moment and to emphasize that this needs to be done when the
device_synchronize() callback is implemented. But it also helps with
implementing support for devices where stopping the controller can require
operations that may sleep.
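
A hedged usage sketch of the resulting client pattern (the freed resource
is hypothetical):

    /* atomic context (e.g. from an interrupt handler or a callback): */
    dmaengine_terminate_async(chan);

    /* later, in process context, before freeing callback resources: */
    dmaengine_synchronize(chan);
    kfree(example_buf);		/* hypothetical resource used by the callback */

    /* or, when already in process context, do both steps in one call: */
    dmaengine_terminate_sync(chan);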

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-11-16 08:28:52 +05:30
Siva Yerramreddy
ff39988abd dma: Add support to program MIC x100 status descriptors
The MIC X100 DMA engine has a special status descriptor which writes
an 8 byte value to a destination location.  This is used to signal
completion of all DMA descriptors prior to the status descriptor.
This patch adds a new DMA engine API which enables updating a
destination address with an 8 byte immediate data value.
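
A rough sketch of using the new prep helper (the destination address and
value are placeholders; see dmaengine.h for the exact prototype):

    struct dma_async_tx_descriptor *desc;

    /* write the 8-byte value to status_dma once prior work has completed */
    desc = dmaengine_prep_dma_imm_data(chan, status_dma, 0xCAFEF00DULL,
    				       DMA_PREP_INTERRUPT);
    if (desc)
    	dmaengine_submit(desc);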

Reviewed-by: Nikhil Rao <nikhil.rao@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Lawrynowicz, Jacek <jacek.lawrynowicz@intel.com>
Signed-off-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Siva Yerramreddy <yshivakrishna@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-10-04 12:46:06 +01:00
Geert Uytterhoeven
7dfffb9541 dmaengine: Stricter legacy checking in dma_request_slave_channel_compat()
dma_request_slave_channel_compat() is meant for drivers that support
both DT and legacy platform device based probing: if DT channel DMA
setup fails, it will fall back to platform data based DMA channel setup,
using hardcoded DMA channel IDs and a filter function.

However, if the DTS doesn't provide a "dmas" property for the device,
the fallback is also used. If the legacy filter function is not
hardcoded in the DMA slave driver, but comes from platform data, it will
be NULL. Then dma_request_slave_channel_compat() will succeed
incorrectly, and return a DMA channel, as a NULL legacy filter function
actually means "all channels are OK", not "do not match".

Later, when trying to use that DMA channel, it will fail with:

    rcar-dmac e6700000.dma-controller: rcar_dmac_prep_slave_sg: bad parameter: len=1, id=-22

To fix this, ensure that both the filter function and the DMA channel ID
are not NULL before using the legacy fallback.

Note that some DMA slave drivers can handle this failure, and will fall
back to PIO.

See also commit 056f6c8702 ("dmaengine: shdma: Make dummy
shdma_chan_filter() always return false"), which fixed the same issue
for the case where shdma_chan_filter() is hardcoded in a DMA slave
driver.
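
For reference, a hedged sketch of the call this guards (the filter function
and parameter are placeholders from the legacy platform data path):

    dma_cap_mask_t mask;

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);

    /* falls back to filter_fn/filter_param only if DT/ACPI lookup fails;
     * with this change the fallback is skipped when either legacy value
     * is NULL */
    chan = dma_request_slave_channel_compat(mask, filter_fn, filter_param,
    					    &pdev->dev, "rx");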

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-20 12:01:03 +05:30
Jarkko Nikula
1dc0428854 dmaengine: Make __dma_request_slave_channel_compat() name argument constant
The inline function __dma_request_slave_channel_compat() doesn't modify the
"name" argument but passes it to dma_request_slave_channel(), which already
takes it as a constant.

Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 14:52:10 +05:30
Vinod Koul
272420214d dmaengine: Add DMA_CTRL_REUSE
This adds a new descriptor flag that allows a client to reuse a descriptor
by submitting it multiple times, for example for a video buffer.
Add helper APIs for this as well.
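
A hedged sketch of the reuse helpers on a prepared descriptor (error
handling trimmed; the descriptor is a placeholder):

    /* mark the descriptor as reusable (only works if the channel allows it) */
    if (dmaengine_desc_set_reuse(desc))
    	return -EINVAL;

    dmaengine_submit(desc);		/* can be resubmitted after completion */

    /* when done with it for good, release it explicitly */
    dmaengine_desc_free(desc);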

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
2015-08-17 14:52:02 +05:30
Maxime Ripard
50c7cd2bd3 dmaengine: Add scatter-gathered memset
The current API allows the driver to accelerate memset by using the DMA
controller.

However, it does so over a contiguous memory area, which might prove
inefficient when you have to do it over a non-contiguous yet repetitive
pattern, since you have to create a number of descriptors and then submit
each of them.

Add a memset operation going over a scatter list to handle such cases in a
single call.
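
A rough sketch of the new operation via the device callback it adds (no
generic wrapper is assumed here; scatterlist setup is omitted):

    struct dma_async_tx_descriptor *desc;

    /* fill every segment described by sgl/nents with the byte 0x00 */
    desc = chan->device->device_prep_dma_memset_sg(chan, sgl, nents, 0x00,
    						   DMA_PREP_INTERRUPT);
    if (desc)
    	dmaengine_submit(desc);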

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-06 08:30:56 +05:30
Maxime Ripard
77a68e56aa dmaengine: Add an enum for the dmaengine alignment constraints
Most drivers need to set constraints on the buffer alignment for async tx
operations. However, even though it is documented, some drivers either use
a defined constant that does not match what the alignment variable expects
(like the DMA_BUSWIDTH_* constants) or fill in the alignment in bytes
instead of as a power of two.

Add a new enum for these alignments that matches what the framework
expects, and convert the drivers to it.
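
A hedged sketch of a driver filling the constraint with the new enum rather
than a raw byte count (the 4-byte value is only an example):

    /* alignment is expressed as a power of two, via the new enum */
    dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES;
    dma_dev->fill_align = DMAENGINE_ALIGN_4_BYTES;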

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 10:53:52 +05:30
Vinod Koul
4fb9c15b4f Merge branch 'topic/xdmac' into for-linus 2015-06-25 09:21:49 +05:30