Merge branch 'linux-2.6' into for-2.6.24

Paul Mackerras
2007-10-03 15:33:17 +10:00
170 changed files with 2331 additions and 1607 deletions
+219
@@ -0,0 +1,219 @@
Asynchronous Transfers/Transforms API
1 INTRODUCTION
2 GENEALOGY
3 USAGE
3.1 General format of the API
3.2 Supported operations
3.3 Descriptor management
3.4 When does the operation execute?
3.5 When does the operation complete?
3.6 Constraints
3.7 Example
4 DRIVER DEVELOPER NOTES
4.1 Conformance points
4.2 "My application needs finer control of hardware channels"
5 SOURCE
---
1 INTRODUCTION
The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.
2 GENEALOGY
The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors. It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines. The following design
features surfaced as a result:
1/ implicit synchronous path: users of the API do not need to know if
the platform they are running on has offload capabilities. The
operation will be offloaded when an engine is available and carried out
in software otherwise.
2/ cross channel dependency chains: the API allows a chain of dependent
operations to be submitted, like xor->copy->xor in the raid5 case. The
API automatically handles cases where the transition from one operation
to another implies a hardware channel switch.
3/ dmaengine extensions to support multiple clients and operation types
beyond 'memcpy'
3 USAGE
3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>,
		  enum async_tx_flags flags,
		  struct dma_async_tx_descriptor *dependency,
		  dma_async_tx_callback callback_routine,
		  void *callback_parameter);
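
As a concrete instance of this template, the copy operation is declared
as follows (prototype per include/linux/async_tx.h in this kernel
version):

struct dma_async_tx_descriptor *
async_memcpy(struct page *dest, struct page *src,
	     unsigned int dest_offset, unsigned int src_offset,
	     size_t len, enum async_tx_flags flags,
	     struct dma_async_tx_descriptor *depend_tx,
	     dma_async_tx_callback cb_fn, void *cb_param);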
3.2 Supported operations:
memcpy - memory copy between a source and a destination buffer
memset - fill a destination buffer with a byte value
xor - xor a series of source buffers and write the result to a
destination buffer
xor_zero_sum - xor a series of source buffers and set a flag if the
result is zero. The implementation attempts to prevent
writes to memory
3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously. Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete. When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted. This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor. A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be
   submitted
2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
   descriptor of a new operation
3/ calling async_tx_ack() on the descriptor
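
A minimal sketch of these rules; the buffers and variable names are
illustrative, not part of the API:

struct dma_async_tx_descriptor *parent, *child;

/* leave 'parent' un-acked so that a dependent operation
 * can still be attached to it */
parent = async_memcpy(dest1, src, 0, 0, len, 0, NULL, NULL, NULL);

/* ASYNC_TX_DEP_ACK acknowledges 'parent' at submit time (method 2);
 * ASYNC_TX_ACK marks 'child' itself as having no future dependents
 * (method 1) */
child = async_memcpy(dest2, dest1, 0, 0, len,
		     ASYNC_TX_DEP_ACK | ASYNC_TX_ACK, parent, NULL, NULL);

Method 3, async_tx_ack(parent), achieves the same acknowledgement as
method 2 in cases where the flag cannot be passed, e.g. when the
decision to submit a child is made after the parent was allocated.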
3.4 When does the operation execute?
Operations are not issued immediately upon return from the
async_<operation> call. Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel. Once a driver-specific threshold is met the driver
automatically issues pending operations. An application can force this
event by calling async_tx_issue_pending_all(). This operates on all
channels since the application has no knowledge of channel-to-operation
mapping.
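
In other words, a typical submission sequence ends with an explicit
flush. A short sketch, reusing the illustrative names from above:

/* queue the operation; it may not start executing yet */
tx = async_memcpy(dest1, src, 0, 0, len, ASYNC_TX_ACK, NULL, NULL, NULL);

/* kick the pending queues of all channels rather than waiting
 * for a driver's internal batching threshold to be reached */
async_tx_issue_pending_all();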
3.5 When does the operation complete?
There are two methods for an application to learn about the completion
of an operation.
1/ Call dma_wait_for_async_tx(). This call causes the CPU to spin while
it polls for the completion of the operation. It handles dependency
chains and issuing pending operations.
2/ Specify a completion callback. The callback routine runs in tasklet
context if the offload engine driver supports interrupts, or it is
called in application context if the operation is carried out
synchronously in software. The callback can be set in the call to
async_<operation>, or when the application needs to submit a chain of
unknown length it can use the async_trigger_callback() routine to set a
completion interrupt/callback at the end of the chain.
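
A sketch of both methods (dma_wait_for_async_tx() returns an
enum dma_status; the callback and context names are illustrative):

/* method 1: poll, also issuing any dependent operations */
if (dma_wait_for_async_tx(tx) == DMA_ERROR)
	printk("DMA error waiting for completion\n");

/* method 2: append an interrupt/callback to a chain of unknown
 * length; the returned descriptor terminates the chain */
tx = async_trigger_callback(ASYNC_TX_ACK | ASYNC_TX_DEP_ACK, tx,
			    my_completion_callback, my_context);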
3.6 Constraints:
1/ Calls to async_<operation> are not permitted in IRQ context. Other
contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations. This
results in recursion in the synchronous case and spin_locks being
acquired twice in the asynchronous case.
3.7 Example:
Perform an xor->copy->xor operation where each operation depends on the
result from the previous operation:
void complete_xor_copy_xor(void *param)
{
	printk("complete\n");
}

void run_xor_copy_xor(struct page **xor_srcs,
		      int xor_src_cnt,
		      struct page *xor_dest,
		      size_t xor_len,
		      struct page *copy_src,
		      struct page *copy_dest,
		      size_t copy_len)
{
	struct dma_async_tx_descriptor *tx;

	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
		       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
			  ASYNC_TX_DEP_ACK, tx, NULL, NULL);
	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
		       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
		       tx, complete_xor_copy_xor, NULL);

	async_tx_issue_pending_all();
}
See include/linux/async_tx.h for more information on the flags. See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.
4 DRIVER DEVELOPER NOTES
4.1 Conformance points:
There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:
1/ Completion callbacks are expected to happen in tasklet context
2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
3/ Use async_tx_run_dependencies() in the descriptor clean up path to
handle submission of dependent operations
4.2 "My application needs finer control of hardware channels"
This requirement seems to arise from cases where a DMA engine driver is
trying to support device-to-memory DMA. The dmaengine and async_tx
implementations were designed for offloading memory-to-memory
operations; however, there are some capabilities of the dmaengine layer
that can be used for platform-specific channel management.
Platform-specific constraints can be handled by registering the
application as a 'dma_client' and implementing a 'dma_event_callback' to
apply a filter to the available channels in the system. Before showing
how to implement a custom dma_event_callback, some background on
dmaengine's client support is required.
The following routines in dmaengine support multiple clients requesting
use of a channel:
- dma_async_client_register(struct dma_client *client)
- dma_async_client_chan_request(struct dma_client *client)
dma_async_client_register takes a pointer to an initialized dma_client
structure. It expects that the 'event_callback' and 'cap_mask' fields
are already initialized.
dma_async_client_chan_request triggers dmaengine to notify the client of
all channels that satisfy the capability mask. It is up to the client's
event_callback routine to track how many channels the client needs and
how many it is currently using. The dma_event_callback routine returns a
dma_state_client code to let dmaengine know the status of the
allocation.
Below is an example of how to extend this functionality for
platform-specific filtering of the available channels beyond the
standard capability mask:
static enum dma_state_client
my_dma_client_callback(struct dma_client *client,
		       struct dma_chan *chan, enum dma_state state)
{
	struct dma_device *dma_dev;
	struct my_platform_specific_dma *plat_dma_dev;

	dma_dev = chan->device;
	plat_dma_dev = container_of(dma_dev,
				    struct my_platform_specific_dma,
				    dma_dev);

	if (!plat_dma_dev->platform_specific_capability)
		return DMA_DUP;

	. . .
}
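
A client using such a callback might be registered as in the following
sketch; only the event_callback and cap_mask initialization is mandated
by dmaengine, the rest is illustrative:

static struct dma_client my_client = {
	.event_callback = my_dma_client_callback,
};

static int __init my_client_init(void)
{
	/* ask only for channels that can offload memory copies */
	dma_cap_set(DMA_MEMCPY, my_client.cap_mask);

	dma_async_client_register(&my_client);
	dma_async_client_chan_request(&my_client);

	return 0;
}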
5 SOURCE
include/linux/dmaengine.h: core header file for DMA drivers and clients
drivers/dma/dmaengine.c: offload engine channel management routines
drivers/dma/: location for offload engine drivers
include/linux/async_tx.h: core header file for the async_tx api
crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c: copy offload
crypto/async_tx/async_memset.c: memory fill offload
crypto/async_tx/async_xor.c: xor and xor zero sum offload
+2
@@ -94,6 +94,8 @@ Your cooperation is appreciated.
 		  9 = /dev/urandom	Faster, less secure random number gen.
 		 10 = /dev/aio		Asynchronous I/O notification interface
 		 11 = /dev/kmsg		Writes to this come out as printk's
+		 12 = /dev/oldmem	Used by crashdump kernels to access
+					the memory of the kernel that crashed.
 
   1 block	RAM disk
 		  0 = /dev/ram0		First RAM disk
File diff suppressed because it is too large
+1 -1
@@ -882,7 +882,7 @@ static u32 handle_block_output(int fd, const struct iovec *iov,
 	 * of the block file (possibly extending it). */
 	if (off + len > device_len) {
 		/* Trim it back to the correct length */
-		ftruncate(dev->fd, device_len);
+		ftruncate64(dev->fd, device_len);
 		/* Die, bad Guest, die. */
 		errx(1, "Write past end %llu+%u", off, len);
 	}
+3 -3
@@ -2624,8 +2624,8 @@ P: Harald Welte
 P:	Jozsef Kadlecsik
 P:	Patrick McHardy
 M:	kaber@trash.net
-L:	netfilter-devel@lists.netfilter.org
-L:	netfilter@lists.netfilter.org	(subscribers-only)
+L:	netfilter-devel@vger.kernel.org
+L:	netfilter@vger.kernel.org
 L:	coreteam@netfilter.org
 W:	http://www.netfilter.org/
 W:	http://www.iptables.org/
@@ -2678,7 +2678,7 @@ M: jmorris@namei.org
 P:	Hideaki YOSHIFUJI
 M:	yoshfuji@linux-ipv6.org
 P:	Patrick McHardy
-M:	kaber@coreworks.de
+M:	kaber@trash.net
 L:	netdev@vger.kernel.org
 T:	git kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.git
 S:	Maintained
+2 -2
@@ -1,8 +1,8 @@
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 23
-EXTRAVERSION =-rc6
-NAME = Pink Farting Weasel
+EXTRAVERSION =-rc9
+NAME = Arr Matey! A Hairy Bilge Rat!
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+2 -2
@@ -338,7 +338,7 @@ pbus_assign_bus_resources(struct pci_bus *bus, struct pci_sys_data *root)
  * pcibios_fixup_bus - Called after each bus is probed,
  * but before its children are examined.
  */
-void __devinit pcibios_fixup_bus(struct pci_bus *bus)
+void pcibios_fixup_bus(struct pci_bus *bus)
 {
 	struct pci_sys_data *root = bus->sysdata;
 	struct pci_dev *dev;
@@ -419,7 +419,7 @@ void __devinit pcibios_fixup_bus(struct pci_bus *bus)
 /*
  * Convert from Linux-centric to bus-centric addresses for bridge devices.
  */
-void __devinit
+void
 pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region,
 			struct resource *res)
 {
+1 -1
@@ -336,7 +336,7 @@ static int ep93xx_gpio_irq_type(unsigned int irq, unsigned int type)
 	if (line >= 0 && line < 16) {
 		gpio_line_config(line, GPIO_IN);
 	} else {
-		gpio_line_config(EP93XX_GPIO_LINE_F(line), GPIO_IN);
+		gpio_line_config(EP93XX_GPIO_LINE_F(line-16), GPIO_IN);
 	}
 	port = line >> 3;
+11 -1
@@ -57,7 +57,17 @@ static void l2x0_inv_range(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
-	start &= ~(CACHE_LINE_SIZE - 1);
+	if (start & (CACHE_LINE_SIZE - 1)) {
+		start &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(start, L2X0_CLEAN_INV_LINE_PA, 1);
+		start += CACHE_LINE_SIZE;
+	}
+
+	if (end & (CACHE_LINE_SIZE - 1)) {
+		end &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(end, L2X0_CLEAN_INV_LINE_PA, 1);
+	}
+
 	for (addr = start; addr < end; addr += CACHE_LINE_SIZE)
 		sync_writel(addr, L2X0_INV_LINE_PA, 1);
 	cache_sync();
+1 -1
@@ -275,7 +275,7 @@ die:
 	hlt
 	jmp	die
 
-	.size	die, .-due
+	.size	die, .-die
 
 	.section ".initdata", "a"
 setup_corrupt:
+31 -12
@@ -20,6 +20,7 @@
 static int detect_memory_e820(void)
 {
+	int count = 0;
 	u32 next = 0;
 	u32 size, id;
 	u8 err;
@@ -27,20 +28,33 @@ static int detect_memory_e820(void)
 	do {
 		size = sizeof(struct e820entry);
-		id = SMAP;
-		asm("int $0x15; setc %0"
-		    : "=am" (err), "+b" (next), "+d" (id), "+c" (size),
-		      "=m" (*desc)
-		    : "D" (desc), "a" (0xe820));
+
+		/* Important: %edx is clobbered by some BIOSes,
+		   so it must be either used for the error output
+		   or explicitly marked clobbered. */
+		asm("int $0x15; setc %0"
+		    : "=d" (err), "+b" (next), "=a" (id), "+c" (size),
+		      "=m" (*desc)
+		    : "D" (desc), "d" (SMAP), "a" (0xe820));
 
-		if (err || id != SMAP)
+		/* Some BIOSes stop returning SMAP in the middle of
+		   the search loop. We don't know exactly how the BIOS
+		   screwed up the map at that point, we might have a
+		   partial map, the full map, or complete garbage, so
+		   just return failure. */
+		if (id != SMAP) {
+			count = 0;
 			break;
+		}
+
+		if (err)
+			break;
 
-		boot_params.e820_entries++;
+		count++;
 		desc++;
-	} while (next && boot_params.e820_entries < E820MAX);
+	} while (next && count < E820MAX);
 
-	return boot_params.e820_entries;
+	return boot_params.e820_entries = count;
 }
 
 static int detect_memory_e801(void)
@@ -89,11 +103,16 @@ static int detect_memory_88(void)
 int detect_memory(void)
 {
+	int err = -1;
+
 	if (detect_memory_e820() > 0)
-		return 0;
+		err = 0;
 
 	if (!detect_memory_e801())
-		return 0;
+		err = 0;
 
-	return detect_memory_88();
+	if (!detect_memory_88())
+		err = 0;
+
+	return err;
 }
+10 -4
@@ -147,7 +147,7 @@ int mode_defined(u16 mode)
 }
 
 /* Set mode (without recalc) */
-static int raw_set_mode(u16 mode)
+static int raw_set_mode(u16 mode, u16 *real_mode)
 {
 	int nmode, i;
 	struct card_info *card;
@@ -165,8 +165,10 @@ static int raw_set_mode(u16 mode)
 			if ((mode == nmode && visible) ||
 			    mode == mi->mode ||
-			    mode == (mi->y << 8)+mi->x)
+			    mode == (mi->y << 8)+mi->x) {
+				*real_mode = mi->mode;
 				return card->set_mode(mi);
+			}
 
 			if (visible)
 				nmode++;
@@ -178,7 +180,7 @@ static int raw_set_mode(u16 mode)
 		if (mode >= card->xmode_first &&
 		    mode < card->xmode_first+card->xmode_n) {
 			struct mode_info mix;
-			mix.mode = mode;
+			*real_mode = mix.mode = mode;
 			mix.x = mix.y = 0;
 			return card->set_mode(&mix);
 		}
@@ -223,6 +225,7 @@ static void vga_recalc_vertical(void)
 static int set_mode(u16 mode)
 {
 	int rv;
+	u16 real_mode;
 
 	/* Very special mode numbers... */
 	if (mode == VIDEO_CURRENT_MODE)
@@ -232,13 +235,16 @@ static int set_mode(u16 mode)
 	else if (mode == EXTENDED_VGA)
 		mode = VIDEO_8POINT;
 
-	rv = raw_set_mode(mode);
+	rv = raw_set_mode(mode, &real_mode);
 	if (rv)
 		return rv;
 
 	if (mode & VIDEO_RECALC)
 		vga_recalc_vertical();
 
+	/* Save the canonical mode number for the kernel, not
+	   an alias, size specification or menu position */
+	boot_params.hdr.vid_mode = real_mode;
+
 	return 0;
 }
+10 -31
@@ -151,51 +151,30 @@ bogus_real_magic:
 #define VIDEO_FIRST_V7 0x0900
 
 # Setting of user mode (AX=mode ID) => CF=success
+# For now, we only handle VESA modes (0x0200..0x03ff). To handle other
+# modes, we should probably compile in the video code from the boot
+# directory.
 mode_set:
 	movw	%ax, %bx
-#if 0
-	cmpb	$0xff, %ah
-	jz	setalias
-
-	testb	$VIDEO_RECALC>>8, %ah
-	jnz	_setrec
-
-	cmpb	$VIDEO_FIRST_RESOLUTION>>8, %ah
-	jnc	setres
-
-	cmpb	$VIDEO_FIRST_SPECIAL>>8, %ah
-	jz	setspc
-
-	cmpb	$VIDEO_FIRST_V7>>8, %ah
-	jz	setv7
-#endif
-
-	cmpb	$VIDEO_FIRST_VESA>>8, %ah
-	jnc	check_vesa
-#if 0
-	orb	%ah, %ah
-	jz	setmenu
-#endif
-
-	decb	%ah
-#	jz	setbios				Add bios modes later
-
-setbad:	clc
+	subb	$VIDEO_FIRST_VESA>>8, %bh
+	cmpb	$2, %bh
+	jb	check_vesa
+
+setbad:
+	clc
 	ret
 
 check_vesa:
-	subb	$VIDEO_FIRST_VESA>>8, %bh
 	orw	$0x4000, %bx			# Use linear frame buffer
 	movw	$0x4f02, %ax			# VESA BIOS mode set call
 	int	$0x10
 	cmpw	$0x004f, %ax			# AL=4f if implemented
-	jnz	_setbad				# AH=0 if OK
+	jnz	setbad				# AH=0 if OK
 
 	stc
 	ret
 
-_setbad: jmp setbad
-
 .code32
 ALIGN
+4 -1
@@ -559,6 +559,9 @@ void xen_exit_mmap(struct mm_struct *mm)
 	put_cpu();
 
 	spin_lock(&mm->page_table_lock);
-	xen_pgd_unpin(mm->pgd);
+
+	/* pgd may not be pinned in the error exit path of execve */
+	if (PagePinned(virt_to_page(mm->pgd)))
+		xen_pgd_unpin(mm->pgd);
 	spin_unlock(&mm->page_table_lock);
 }
+1 -4
@@ -177,10 +177,7 @@ handle_real_irq:
 		outb(cached_master_mask, PIC_MASTER_IMR);
 		outb(0x60+irq,PIC_MASTER_CMD);	/* 'Specific EOI to master */
 	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 	spin_unlock_irqrestore(&i8259A_lock, flags);
 	return;
+2 -8
@@ -52,11 +52,8 @@ static void level_mask_and_ack_msc_irq(unsigned int irq)
 	mask_msc_irq(irq);
 	if (!cpu_has_veic)
 		MSCIC_WRITE(MSC01_IC_EOI, 0);
-#ifdef CONFIG_MIPS_MT_SMTC
 	/* This actually needs to be a call into platform code */
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }
 
 /*
@@ -73,10 +70,7 @@ static void edge_mask_and_ack_msc_irq(unsigned int irq)
 		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r | ~MSC01_IC_SUP_EDGE_BIT);
 		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r);
 	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }
 
 /*
+1 -9
@@ -74,20 +74,12 @@ EXPORT_SYMBOL_GPL(free_irqno);
  */
 void ack_bad_irq(unsigned int irq)
 {
+	smtc_im_ack_irq(irq);
 	printk("unexpected IRQ # %d\n", irq);
 }
 
 atomic_t irq_err_count;
 
-#ifdef CONFIG_MIPS_MT_SMTC
-/*
- * SMTC Kernel needs to manipulate low-level CPU interrupt mask
- * in do_IRQ. These are passed in setup_irq_smtc() and stored
- * in this table.
- */
-unsigned long irq_hwmask[NR_IRQS];
-#endif /* CONFIG_MIPS_MT_SMTC */
-
 /*
  * Generic, controller-independent functions:
  */
+1 -1
@@ -525,5 +525,5 @@ sys_call_table:
 	PTR	compat_sys_signalfd
 	PTR	compat_sys_timerfd
 	PTR	sys_eventfd
-	PTR	sys_fallocate			/* 4320 */
+	PTR	sys32_fallocate			/* 4320 */
 	.size	sys_call_table,.-sys_call_table
+4 -1
@@ -25,8 +25,11 @@
 #include <asm/smtc_proc.h>
 
 /*
- * This file should be built into the kernel only if CONFIG_MIPS_MT_SMTC is set.
+ * SMTC Kernel needs to manipulate low-level CPU interrupt mask
+ * in do_IRQ. These are passed in setup_irq_smtc() and stored
+ * in this table.
  */
+unsigned long irq_hwmask[NR_IRQS];
 
 #define LOCK_MT_PRA() \
 	local_irq_save(flags); \
+2
@@ -45,6 +45,8 @@ SECTIONS
 	__dbe_table : { *(__dbe_table) }
 	__stop___dbe_table = .;
 
+	NOTES
+
 	RODATA
 
 	/* writeable */

Some files were not shown because too many files have changed in this diff.