#
# DMA engine configuration
#

menuconfig DMADEVICES
	bool "DMA Engine support"
	depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC
	depends on !HIGHMEM64G
	help
	  DMA engines can do asynchronous data transfers without
	  involving the host CPU. Currently, this framework can be
	  used to offload memory copies in the network stack and
	  RAID operations in the MD driver.

if DMADEVICES

comment "DMA Devices"

config INTEL_IOATDMA
	tristate "Intel I/OAT DMA support"
	depends on PCI && X86
	select DMA_ENGINE
	select DCA
	help
	  Enable support for the Intel(R) I/OAT DMA engine present
	  in recent Intel Xeon chipsets.

	  Say Y here if you have such a chipset.

	  If unsure, say N.

config INTEL_IOP_ADMA
	tristate "Intel IOP ADMA support"
	depends on ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX
	select ASYNC_CORE
	select DMA_ENGINE
	help
	  Enable support for the Intel(R) IOP Series RAID engines.

config FSL_DMA
	bool "Freescale MPC85xx/MPC83xx DMA support"
	depends on PPC
	select DMA_ENGINE
	---help---
	  Enable support for the Freescale DMA engine. Currently, it
	  supports the MPC8560/40, MPC8555, MPC8548 and MPC8641
	  processors. The MPC8349 and MPC8360 are also supported.

config DMA_ENGINE
	bool

comment "DMA Clients"
	depends on DMA_ENGINE

config NET_DMA
	bool "Network: TCP receive copy offload"
	depends on DMA_ENGINE && NET
	help
	  This enables the use of DMA engines in the network stack to
	  offload receive copy-to-user operations, freeing CPU cycles.

	  Since this is the main user of the DMA engine, it should be
	  enabled; say Y here.

endif
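
# Illustrative only: a .config fragment enabling the framework, the Intel
# I/OAT driver and the network offload client might look like the lines
# below (the exact option values depend on your platform; DMA_ENGINE and
# DCA are pulled in automatically via 'select'):
#
#   CONFIG_DMADEVICES=y
#   CONFIG_DMA_ENGINE=y
#   CONFIG_DCA=y
#   CONFIG_INTEL_IOATDMA=m
#   CONFIG_NET_DMA=y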