Merge branch 'core/mutexes' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into drm-next

Merge in the tip core/mutexes branch for future GPU driver use. Ingo will
send this branch to Linus prior to drm-next.

* 'core/mutexes' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  locking-selftests: Handle unexpected failures more strictly
  mutex: Add more w/w tests to test EDEADLK path handling
  mutex: Add more tests to lib/locking-selftest.c
  mutex: Add w/w tests to lib/locking-selftest.c
  mutex: Add w/w mutex slowpath debugging
  mutex: Add support for wound/wait style locks
  arch: Make __mutex_fastpath_lock_retval return whether fastpath succeeded or not
  powerpc/pci: Fix boot panic on mpc83xx (regression)
  s390/ipl: Fix FCP WWPN and LUN format strings for read
  fs: fix new splice.c kernel-doc warning
  spi/pxa2xx: fix memory corruption due to wrong size used in devm_kzalloc()
  s390/mem_detect: fix memory hole handling
  s390/dma: support debug_dma_mapping_error
  s390/dma: fix mapping_error detection
  s390/irq: Only define synchronize_irq() on SMP
  Input: xpad - fix for "Mad Catz Street Fighter IV FightPad" controllers
  Input: wacom - add a new stylus (0x100802) for Intuos5 and Cintiqs
  spi/pxa2xx: use GFP_ATOMIC in sg table allocation
  fuse: hold i_mutex in fuse_file_fallocate()
  Input: add missing dependencies on CONFIG_HAS_IOMEM
  ...

@@ -0,0 +1,344 @@
Wait/Wound Deadlock-Proof Mutex Design
======================================

Please read mutex-design.txt first, as it applies to wait/wound mutexes too.

Motivation for WW-Mutexes
-------------------------

GPUs do operations that commonly involve many buffers. Those buffers can be
shared across contexts/processes, exist in different memory domains (for
example VRAM vs system memory), and so on. And with PRIME / dmabuf, they can
even be shared across devices. So there are a handful of situations where the
driver needs to wait for buffers to become ready. If you think about this in
terms of waiting on a buffer mutex for it to become available, this presents
a problem because there is no way to guarantee that buffers appear in an
execbuf/batch in the same order in all contexts. That is directly under the
control of userspace, and a result of the sequence of GL calls that an
application makes, which results in the potential for deadlock. The problem
gets more complex when you consider that the kernel may need to migrate the
buffer(s) into VRAM before the GPU operates on the buffer(s), which may in
turn require evicting some other buffers (and you don't want to evict other
buffers which are already queued up to the GPU), but for a simplified
understanding of the problem you can ignore this.

The algorithm that the TTM graphics subsystem came up with for dealing with
this problem is quite simple. For each group of buffers (execbuf) that needs
to be locked, the caller is assigned a unique reservation id/ticket from a
global counter. In case of deadlock while locking all the buffers associated
with an execbuf, the one with the lowest reservation ticket (i.e. the oldest
task) wins, and the one with the higher reservation id (i.e. the younger
task) unlocks all of the buffers that it has already locked, and then tries
again.

In the RDBMS literature this deadlock handling approach is called wait/wound:
The older task waits until it can acquire the contended lock. The younger
task needs to back off and drop all the locks it is currently holding, i.e.
the younger task is wounded.
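
To make the wound/wait rule concrete, here is a conceptual sketch of the
decision made on contention. The stamp field name and the helper are
illustrative assumptions for this sketch, not the kernel's actual internals:

	/*
	 * Conceptual sketch only: each acquire context carries a ticket
	 * ("stamp" here, an assumed name) from a global counter, where a
	 * lower value means an older task.
	 */
	if (our_ctx->stamp < holder_ctx->stamp) {
		/* we are older: wait until the holder releases the lock */
		wait_for_unlock();		/* hypothetical helper */
	} else {
		/* we are younger and thus wounded: the caller must drop
		 * all locks it holds and retry with the same context */
		return -EDEADLK;
	}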

Concepts
--------

Compared to normal mutexes two additional concepts/objects show up in the lock
interface for w/w mutexes:

Acquire context: To ensure eventual forward progress it is important that a
task trying to acquire locks doesn't grab a new reservation id, but keeps the
one it acquired when starting the lock acquisition. This ticket is stored in
the acquire context. Furthermore the acquire context keeps track of debugging
state to catch w/w mutex interface abuse.

W/w class: In contrast to normal mutexes the lock class needs to be explicit
for w/w mutexes, since it is required to initialize the acquire context.
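
As a minimal illustration of how these two objects relate (the class name
slot_ww_class is made up for this sketch):

	static DEFINE_WW_CLASS(slot_ww_class);	/* one class per lock domain */

	void example(struct ww_mutex *lock)
	{
		struct ww_acquire_ctx ctx;
		int ret;

		/* the context is tied to the class and gets its ticket here */
		ww_acquire_init(&ctx, &slot_ww_class);
		ret = ww_mutex_lock(lock, &ctx);	/* first lock cannot deadlock */
		ww_acquire_done(&ctx);

		/* ... use the protected object ... */

		ww_mutex_unlock(lock);
		ww_acquire_fini(&ctx);
	}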

Furthermore there are three different classes of w/w lock acquire functions:

* Normal lock acquisition with a context, using ww_mutex_lock.

* Slowpath lock acquisition on the contending lock, used by the wounded task
  after having dropped all already acquired locks. These functions have the
  _slow postfix.

  From a simple semantics point-of-view the _slow functions are not strictly
  required, since simply calling the normal ww_mutex_lock functions on the
  contending lock (after having dropped all other already acquired locks)
  will work correctly. After all, if no other ww mutex has been acquired yet
  there's no deadlock potential and hence the ww_mutex_lock call will block
  and not prematurely return -EDEADLK. The advantage of the _slow functions
  is in interface safety:
  - ww_mutex_lock has a __must_check int return type, whereas
    ww_mutex_lock_slow has a void return type. Note that since ww mutex code
    needs loops/retries anyway the __must_check doesn't result in spurious
    warnings, even though the very first lock operation can never fail.
  - When full debugging is enabled ww_mutex_lock_slow checks that all
    acquired ww mutexes have been released (preventing deadlocks) and makes
    sure that we block on the contending lock (preventing spinning through
    the -EDEADLK slowpath until the contended lock can be acquired).

* Functions to only acquire a single w/w mutex, which results in the exact
  same semantics as a normal mutex. This is done by calling ww_mutex_lock
  with a NULL context.

  Again this is not strictly required. But often you only want to acquire a
  single lock in which case it's pointless to set up an acquire context (and
  so better to avoid grabbing a deadlock avoidance ticket).

Of course, all the usual variants for handling wake-ups due to signals are
also provided; a short sketch follows below.
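
For instance, an interruptible acquisition could be handled roughly like this
(the unwind label is an assumption about the surrounding caller, not part of
the API):

	ret = ww_mutex_lock_interruptible(&obj->lock, ctx);
	if (ret == -EINTR) {
		/* hit by a signal: unwind and report back to userspace */
		goto err_unwind;
	}
	/* -EDEADLK backoff works as before; the matching slowpath variant
	 * is ww_mutex_lock_slow_interruptible(), which can also fail with
	 * -EINTR. */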

Usage
-----

There are three different ways to acquire locks within the same w/w class.
Common definitions for methods #1 and #2:

static DEFINE_WW_CLASS(ww_class);

struct obj {
	struct ww_mutex lock;
	/* obj data */
};

struct obj_entry {
	struct list_head head;
	struct obj *obj;
};

Method 1, using a list in execbuf->buffers that's not allowed to be
reordered. This is useful if a list of required objects is already tracked
somewhere. Furthermore the lock helper can propagate the -EALREADY return
code back to the caller as a signal that an object is twice on the list.
This is useful if the list is constructed from userspace input and the ABI
requires userspace to not have duplicate entries (e.g. for a gpu
commandbuffer submission ioctl).

int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *res_obj = NULL;
	struct obj_entry *contended_entry = NULL;
	struct obj_entry *entry;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	list_for_each_entry (entry, list, head) {
		if (entry->obj == res_obj) {
			res_obj = NULL;
			continue;
		}
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			contended_entry = entry;
			goto err;
		}
	}

	ww_acquire_done(ctx);
	return 0;

err:
	list_for_each_entry_continue_reverse (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	if (res_obj)
		ww_mutex_unlock(&res_obj->lock);

	if (ret == -EDEADLK) {
		/* we lost out in a seqno race, lock and retry.. */
		ww_mutex_lock_slow(&contended_entry->obj->lock, ctx);
		res_obj = contended_entry->obj;
		goto retry;
	}
	ww_acquire_fini(ctx);

	return ret;
}

Method 2, using a list in execbuf->buffers that can be reordered. It has the
same semantics of duplicate-entry detection using -EALREADY as method 1
above, but the list reordering allows for a bit more idiomatic code.

int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry, *entry2;
	int ret;

	ww_acquire_init(ctx, &ww_class);

	list_for_each_entry (entry, list, head) {
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			entry2 = entry;

			list_for_each_entry_continue_reverse (entry2, list, head)
				ww_mutex_unlock(&entry2->obj->lock);

			if (ret != -EDEADLK) {
				ww_acquire_fini(ctx);
				return ret;
			}

			/* we lost out in a seqno race, lock and retry.. */
			ww_mutex_lock_slow(&entry->obj->lock, ctx);

			/*
			 * Move buf to head of the list, this will point
			 * buf->next to the first unlocked entry,
			 * restarting the for loop.
			 */
			list_del(&entry->head);
			list_add(&entry->head, list);
		}
	}

	ww_acquire_done(ctx);
	return 0;
}

Unlocking works the same way for both methods #1 and #2:

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry;

	list_for_each_entry (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	ww_acquire_fini(ctx);
}
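
A caller of these two helpers would then look roughly like this hypothetical
glue code (exec_list is assumed to be a list of obj_entry items built
beforehand):

	struct ww_acquire_ctx ctx;
	int ret;

	ret = lock_objs(&exec_list, &ctx);
	if (ret)
		return ret;	/* e.g. -EALREADY for a duplicate entry */

	/* ... operate on the locked buffers ... */

	unlock_objs(&exec_list, &ctx);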

Method 3 is useful if the list of objects is constructed ad-hoc and not
upfront, e.g. when adjusting edges in a graph where each node has its own
ww_mutex lock, and edges can only be changed when holding the locks of all
involved nodes. w/w mutexes are a natural fit for such a case for two
reasons:
- They can handle lock-acquisition in any order, which allows us to start
  walking a graph from a starting point and then iteratively discovering new
  edges and locking down the nodes those edges connect to.
- Due to the -EALREADY return code signalling that a given object is already
  held, there's no need for additional book-keeping to break cycles in the
  graph or keep track of which locks are already held (when using more than
  one node as a starting point).

Note that this approach differs in two important ways from the above methods:
- Since the list of objects is dynamically constructed (and might very well
  be different when retrying due to hitting the -EDEADLK wound condition)
  there's no need to keep any object on a persistent list when it's not
  locked. We can therefore move the list_head into the object itself.
- On the other hand, the dynamic object list construction also means that
  the -EALREADY return code can't be propagated.

Note also that methods #1 and #2 can be combined with method #3, e.g. to
first lock a list of starting nodes (passed in from userspace) using one of
the above methods, and then lock any additional objects affected by the
operations using method #3 below. The backoff/retry procedure will be a bit
more involved, since when the dynamic locking step hits -EDEADLK we also
need to unlock all the objects acquired with the fixed list. But the w/w
mutex debug checks will catch any interface misuse for these cases.

Also, method 3 can't fail the lock acquisition step since it doesn't return
-EALREADY. Of course this would be different when using the _interruptible
variants, but that's outside of the scope of these examples here.

struct obj {
	struct ww_mutex ww_mutex;
	struct list_head locked_list;
};

static DEFINE_WW_CLASS(ww_class);

void __unlock_objs(struct list_head *list)
{
	struct obj *entry, *temp;

	list_for_each_entry_safe (entry, temp, list, locked_list) {
		/* need to do that before unlocking, since only the current
		 * lock holder is allowed to use the object */
		list_del(&entry->locked_list);
		ww_mutex_unlock(&entry->ww_mutex);
	}
}

void lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *obj;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	/* re-init loop start state */
	loop {
		/* magic code which walks over a graph and decides which
		 * objects to lock */

		ret = ww_mutex_lock(&obj->ww_mutex, ctx);
		if (ret == -EALREADY) {
			/* we have that one already, get to the next object */
			continue;
		}
		if (ret == -EDEADLK) {
			__unlock_objs(list);

			ww_mutex_lock_slow(&obj->ww_mutex, ctx);
			list_add(&obj->locked_list, list);
			goto retry;
		}

		/* locked a new object, add it to the list */
		list_add_tail(&obj->locked_list, list);
	}

	ww_acquire_done(ctx);
}

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	__unlock_objs(list);
	ww_acquire_fini(ctx);
}

Method 4: Only lock one single object. In that case deadlock detection and
prevention is obviously overkill, since with grabbing just one lock you can't
produce a deadlock within just one class. To simplify this case the w/w mutex
api can be used with a NULL context; a short sketch follows.
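
A minimal sketch of this degenerate case, where the w/w mutex behaves exactly
like a normal mutex:

	ww_mutex_lock(&obj->lock, NULL);	/* never returns -EDEADLK or -EALREADY */
	/* ... use obj ... */
	ww_mutex_unlock(&obj->lock);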

Implementation Details
----------------------

Design:
ww_mutex currently encapsulates a struct mutex. This means no extra overhead
for normal mutex locks, which are far more common. As such there is only a
small increase in code size if wait/wound mutexes are not used.
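
For reference, the encapsulation looks roughly like this at the time of this
commit (a sketch; the debug-only fields are condensed):

	struct ww_mutex {
		struct mutex base;		/* the embedded normal mutex */
		struct ww_acquire_ctx *ctx;	/* owning acquire context, if any */
	#ifdef CONFIG_DEBUG_MUTEXES
		struct ww_class *ww_class;	/* used for interface-abuse checks */
	#endif
	};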

In general, not much contention is expected. The locks are typically used to
serialize access to resources for devices. The only way to make wakeups
smarter would be at the cost of adding a field to struct mutex_waiter. This
would add overhead to all cases where normal mutexes are used, and
ww_mutexes are generally less performance sensitive.

Lockdep:
Special care has been taken to warn for as many cases of api abuse as
possible. Some common api abuses will be caught with CONFIG_DEBUG_MUTEXES,
but CONFIG_PROVE_LOCKING is recommended.

Some of the errors which will be warned about (a sketch of one such misuse
follows after these lists):
 - Forgetting to call ww_acquire_fini or ww_acquire_init.
 - Attempting to lock more mutexes after ww_acquire_done.
 - Attempting to lock the wrong mutex after -EDEADLK and
   unlocking all mutexes.
 - Attempting to lock the right mutex after -EDEADLK,
   before unlocking all mutexes.
 - Calling ww_mutex_lock_slow before -EDEADLK was returned.
 - Unlocking mutexes with the wrong unlock function.
 - Calling one of the ww_acquire_* twice on the same context.
 - Using a different ww_class for the mutex than for the ww_acquire_ctx.
 - Normal lockdep errors that can result in deadlocks.

Some of the lockdep errors that can result in deadlocks:
 - Calling ww_acquire_init to initialize a second ww_acquire_ctx before
   having called ww_acquire_fini on the first.
 - 'normal' deadlocks that can occur.
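
For example, a hypothetical driver snippet that triggers the "lock after
ww_acquire_done" warning when debugging is enabled:

	ww_acquire_init(&ctx, &ww_class);
	ww_mutex_lock(&obj_a->lock, &ctx);
	ww_acquire_done(&ctx);			/* declares the acquire phase finished */

	ww_mutex_lock(&obj_b->lock, &ctx);	/* misuse: warned about */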

FIXME: Update this section once we have the TASK_DEADLOCK task state flag magic
implemented.

@@ -29,17 +29,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }

@@ -82,17 +82,15 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(__mutex_dec_return_lock(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }

@@ -97,22 +97,14 @@ static int fsl_indirect_read_config(struct pci_bus *bus, unsigned int devfn,
 	return indirect_read_config(bus, devfn, offset, len, val);
 }
 
-static struct pci_ops fsl_indirect_pci_ops =
+#if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx)
+
+static struct pci_ops fsl_indirect_pcie_ops =
 {
 	.read = fsl_indirect_read_config,
 	.write = indirect_write_config,
 };
 
-static void __init fsl_setup_indirect_pci(struct pci_controller* hose,
-					  resource_size_t cfg_addr,
-					  resource_size_t cfg_data, u32 flags)
-{
-	setup_indirect_pci(hose, cfg_addr, cfg_data, flags);
-	hose->ops = &fsl_indirect_pci_ops;
-}
-
-#if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx)
-
 #define MAX_PHYS_ADDR_BITS	40
 static u64 pci64_dma_offset = 1ull << MAX_PHYS_ADDR_BITS;

@@ -504,13 +496,15 @@ int __init fsl_add_bridge(struct platform_device *pdev, int is_primary)
 	if (!hose->private_data)
 		goto no_bridge;
 
-	fsl_setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4,
+	setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4,
 			   PPC_INDIRECT_TYPE_BIG_ENDIAN);
 
 	if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0)
 		hose->indirect_type |= PPC_INDIRECT_TYPE_FSL_CFG_REG_LINK;
 
 	if (early_find_capability(hose, 0, 0, PCI_CAP_ID_EXP)) {
+		/* use fsl_indirect_read_config for PCIe */
+		hose->ops = &fsl_indirect_pcie_ops;
 		/* For PCIE read HEADER_TYPE to identify controler mode */
 		early_read_config_byte(hose, 0, 0, PCI_HEADER_TYPE, &hdr_type);
 		if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE)

@@ -814,7 +808,7 @@ int __init mpc83xx_add_bridge(struct device_node *dev)
 		if (ret)
 			goto err0;
 	} else {
-		fsl_setup_indirect_pci(hose, rsrc_cfg.start,
+		setup_indirect_pci(hose, rsrc_cfg.start,
 				   rsrc_cfg.start + 4, 0);
 	}

@@ -50,9 +50,10 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
 	struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
+	debug_dma_mapping_error(dev, dma_addr);
 	if (dma_ops->mapping_error)
 		return dma_ops->mapping_error(dev, dma_addr);
-	return (dma_addr == 0UL);
+	return (dma_addr == DMA_ERROR_CODE);
 }
 
 static inline void *dma_alloc_coherent(struct device *dev, size_t size,

@@ -754,9 +754,9 @@ static struct bin_attribute sys_reipl_fcp_scp_data_attr = {
 	.write = reipl_fcp_scpdata_write,
 };
 
-DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%llx\n",
 		   reipl_block_fcp->ipl_info.fcp.wwpn);
-DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%llx\n",
 		   reipl_block_fcp->ipl_info.fcp.lun);
 DEFINE_IPL_ATTR_RW(reipl_fcp, bootprog, "%lld\n", "%lld\n",
 		   reipl_block_fcp->ipl_info.fcp.bootprog);

@@ -1323,9 +1323,9 @@ static struct shutdown_action __refdata reipl_action = {
 
 /* FCP dump device attributes */
 
-DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%llx\n",
 		   dump_block_fcp->ipl_info.fcp.wwpn);
-DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%llx\n",
 		   dump_block_fcp->ipl_info.fcp.lun);
 DEFINE_IPL_ATTR_RW(dump_fcp, bootprog, "%lld\n", "%lld\n",
 		   dump_block_fcp->ipl_info.fcp.bootprog);

@@ -312,6 +312,7 @@ void measurement_alert_subclass_unregister(void)
 }
 EXPORT_SYMBOL(measurement_alert_subclass_unregister);
 
+#ifdef CONFIG_SMP
 void synchronize_irq(unsigned int irq)
 {
 	/*
@@ -320,6 +321,7 @@ void synchronize_irq(unsigned int irq)
 	 */
 }
 EXPORT_SYMBOL_GPL(synchronize_irq);
+#endif
 
 #ifndef CONFIG_PCI

@@ -123,7 +123,8 @@ void create_mem_hole(struct mem_chunk mem_chunk[], unsigned long addr,
 			continue;
 		} else if ((addr <= chunk->addr) &&
 			   (addr + size >= chunk->addr + chunk->size)) {
-			memset(chunk, 0 , sizeof(*chunk));
+			memmove(chunk, chunk + 1, (MEMORY_CHUNKS-i-1) * sizeof(*chunk));
+			memset(&mem_chunk[MEMORY_CHUNKS-1], 0, sizeof(*chunk));
 		} else if (addr + size < chunk->addr + chunk->size) {
 			chunk->size = chunk->addr + chunk->size - addr - size;
 			chunk->addr = addr + size;

@@ -37,7 +37,7 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
 }
 
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	int __done, __res;
 
@@ -51,7 +51,7 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
 		: "t");
 
 	if (unlikely(!__done || __res != 0))
-		__res = fail_fn(count);
+		__res = -1;
 
 	return __res;
 }

@@ -42,17 +42,14 @@ do {								\
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }

@@ -37,17 +37,14 @@ do {								\
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  *                                from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }

@@ -137,7 +137,7 @@ static const struct xpad_device {
 	{ 0x0738, 0x4540, "Mad Catz Beat Pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
 	{ 0x0738, 0x4556, "Mad Catz Lynx Wireless Controller", 0, XTYPE_XBOX },
 	{ 0x0738, 0x4716, "Mad Catz Wired Xbox 360 Controller", 0, XTYPE_XBOX360 },
-	{ 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", XTYPE_XBOX360 },
+	{ 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
 	{ 0x0738, 0x4738, "Mad Catz Wired Xbox 360 Controller (SFIV)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
 	{ 0x0738, 0x6040, "Mad Catz Beat Pad Pro", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
 	{ 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 },

@@ -431,6 +431,7 @@ config KEYBOARD_TEGRA
 
 config KEYBOARD_OPENCORES
 	tristate "OpenCores Keyboard Controller"
+	depends on HAS_IOMEM
 	help
 	  Say Y here if you want to use the OpenCores Keyboard Controller
 	  http://www.opencores.org/project,keyboardcontroller

@@ -205,6 +205,7 @@ config SERIO_XILINX_XPS_PS2
 
 config SERIO_ALTERA_PS2
 	tristate "Altera UP PS/2 controller"
+	depends on HAS_IOMEM
 	help
 	  Say Y here if you have Altera University Program PS/2 ports.

@@ -363,6 +363,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
 		case 0x140802: /* Intuos4/5 13HD/24HD Classic Pen */
 		case 0x160802: /* Cintiq 13HD Pro Pen */
 		case 0x180802: /* DTH2242 Pen */
+		case 0x100802: /* Intuos4/5 13HD/24HD General Pen */
 			wacom->tool[idx] = BTN_TOOL_PEN;
 			break;
 
@@ -401,6 +402,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
 		case 0x10080c: /* Intuos4/5 13HD/24HD Art Pen Eraser */
 		case 0x16080a: /* Cintiq 13HD Pro Pen Eraser */
 		case 0x18080a: /* DTH2242 Eraser */
+		case 0x10080a: /* Intuos4/5 13HD/24HD General Pen Eraser */
 			wacom->tool[idx] = BTN_TOOL_RUBBER;
 			break;

@@ -116,6 +116,15 @@ static int ttsp_send_command(struct cyttsp *ts, u8 cmd)
 	return ttsp_write_block_data(ts, CY_REG_BASE, sizeof(cmd), &cmd);
 }
 
+static int cyttsp_handshake(struct cyttsp *ts)
+{
+	if (ts->pdata->use_hndshk)
+		return ttsp_send_command(ts,
+				ts->xy_data.hst_mode ^ CY_HNDSHK_BIT);
+
+	return 0;
+}
+
 static int cyttsp_load_bl_regs(struct cyttsp *ts)
 {
 	memset(&ts->bl_data, 0, sizeof(ts->bl_data));
@@ -133,7 +142,7 @@ static int cyttsp_exit_bl_mode(struct cyttsp *ts)
 	memcpy(bl_cmd, bl_command, sizeof(bl_command));
 	if (ts->pdata->bl_keys)
 		memcpy(&bl_cmd[sizeof(bl_command) - CY_NUM_BL_KEYS],
-			ts->pdata->bl_keys, sizeof(bl_command));
+			ts->pdata->bl_keys, CY_NUM_BL_KEYS);
 
 	error = ttsp_write_block_data(ts, CY_REG_BASE,
 				      sizeof(bl_cmd), bl_cmd);
@@ -167,6 +176,10 @@ static int cyttsp_set_operational_mode(struct cyttsp *ts)
 	if (error)
 		return error;
 
+	error = cyttsp_handshake(ts);
+	if (error)
+		return error;
+
 	return ts->xy_data.act_dist == CY_ACT_DIST_DFLT ? -EIO : 0;
 }
 
@@ -188,6 +201,10 @@ static int cyttsp_set_sysinfo_mode(struct cyttsp *ts)
 	if (error)
 		return error;
 
+	error = cyttsp_handshake(ts);
+	if (error)
+		return error;
+
 	if (!ts->sysinfo_data.tts_verh && !ts->sysinfo_data.tts_verl)
 		return -EIO;
 
@@ -344,12 +361,9 @@ static irqreturn_t cyttsp_irq(int irq, void *handle)
 		goto out;
 
 	/* provide flow control handshake */
-	if (ts->pdata->use_hndshk) {
-		error = ttsp_send_command(ts,
-			ts->xy_data.hst_mode ^ CY_HNDSHK_BIT);
-		if (error)
-			goto out;
-	}
+	error = cyttsp_handshake(ts);
+	if (error)
+		goto out;
 
 	if (unlikely(ts->state == CY_IDLE_STATE))
 		goto out;

@@ -67,8 +67,8 @@ struct cyttsp_xydata {
 /* TTSP System Information interface definition */
 struct cyttsp_sysinfo_data {
 	u8 hst_mode;
-	u8 mfg_cmd;
 	u8 mfg_stat;
+	u8 mfg_cmd;
 	u8 cid[3];
 	u8 tt_undef1;
 	u8 uid[8];

@@ -59,7 +59,7 @@ static int pxa2xx_spi_map_dma_buffer(struct driver_data *drv_data,
 		int ret;
 
 		sg_free_table(sgt);
-		ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+		ret = sg_alloc_table(sgt, nents, GFP_ATOMIC);
 		if (ret)
 			return ret;
 	}

@@ -1075,7 +1075,7 @@ pxa2xx_spi_acpi_get_pdata(struct platform_device *pdev)
 	    acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev))
 		return NULL;
 
-	pdata = devm_kzalloc(&pdev->dev, sizeof(*ssp), GFP_KERNEL);
+	pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata) {
 		dev_err(&pdev->dev,
 			"failed to allocate memory for platform data\n");

@@ -444,7 +444,7 @@ static int s3c64xx_spi_prepare_transfer(struct spi_master *spi)
 	}
 
 	ret = pm_runtime_get_sync(&sdd->pdev->dev);
-	if (ret != 0) {
+	if (ret < 0) {
 		dev_err(dev, "Failed to enable device: %d\n", ret);
 		goto out_tx;
 	}

Some files were not shown because too many files have changed in this diff.