This patch removes from comments the CVS keywords that haven't been
updated in a long time.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Almost all implementations of pci_iomap() in the kernel, including the generic
lib/iomap.c one, copy the contents of a struct resource into unsigned longs,
which will break on 32-bit platforms with 64-bit resources.
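For illustration, a sketch of the fixed generic version (loosely
following lib/iomap.c; not the exact hunk):

    void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
    {
            /* resource_size_t, not unsigned long: on a 32-bit platform
             * with 64-bit resources an unsigned long truncates the BAR
             * address. */
            resource_size_t start = pci_resource_start(dev, bar);
            resource_size_t len = pci_resource_len(dev, bar);
            unsigned long flags = pci_resource_flags(dev, bar);

            if (!len || !start)
                    return NULL;
            if (maxlen && len > maxlen)
                    len = maxlen;
            if (flags & IORESOURCE_IO)
                    return ioport_map(start, len);
            if (flags & IORESOURCE_MEM)
                    return ioremap(start, len);
            return NULL;
    }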
This fixes all definitions of pci_iomap() to use resource_size_t. I also
"fixed" the 64-bit arches for consistency.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A few places missed the "a" specifier for the __ex_table section. Add
these so we avoid generating an additional section at link time.
Latest modpost would otherwise complain like this:
WARNING: vmlinux.o (__ex_table.2): section name inconsistency.
(.[number]+) following section name.
Did you forget to use "ax"/"aw" in a .S file?
Note that for example <linux/init.h> contains
section definitions for use in .S files.
WARNING: vmlinux.o (__ex_table.4): section name inconsistency.
(.[number]+) following section name.
Did you forget to use "ax"/"aw" in a .S file?
Note that for example <linux/init.h> contains
section definitions for use in .S files.
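The fix itself is mechanical. For an exception table entry emitted
from inline assembly it amounts to the following (a sketch, not one
of the actual hunks):

    /* before: no section flags, so ld keeps this apart from the
     * "a"-flagged __ex_table sections gcc emits and ends up creating
     * __ex_table.N */
    ".section __ex_table\n"

    /* after: explicitly allocatable, so all the entries merge into
     * one __ex_table section (executable text sections want "ax",
     * writable data sections "aw") */
    ".section __ex_table,\"a\"\n"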
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
ld will generate a uniquely named section when the assembler code does
not use "ax" but gcc does. Add the missing annotation.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the cpu count is high and contention hits an atomic object, the
processors can synchronize such that some cpus continually get knocked
out and cannot complete the atomic update.
So implement an exponential backoff for the SMP case.
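In C the idea looks roughly like this (a sketch using the gcc __sync
builtin; the real change is in the sparc64 atomic assembler, and the
backoff limit here is made up):

    #define BACKOFF_LIMIT   4096    /* arbitrary cap for this sketch */

    static void atomic_add_backoff(int i, volatile int *v)
    {
            int backoff = 1;

            for (;;) {
                    int old = *v;

                    if (__sync_val_compare_and_swap(v, old, old + i) == old)
                            return;
                    /* Lost the race: spin a while before retrying, so
                     * the same subset of cpus does not keep losing the
                     * race forever. */
                    for (int n = 0; n < backoff; n++)
                            __asm__ __volatile__("" ::: "memory");
                    if (backoff < BACKOFF_LIMIT)
                            backoff <<= 1;  /* exponential backoff */
            }
    }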
Signed-off-by: David S. Miller <davem@davemloft.net>
Some typos led to using %i6/%i7 instead of %l6/%l7 in loads, which is
really, really bad because those are the frame pointer and return PC.
Based upon a raid5 crash report by Bertrand Joel.
Signed-off-by: David S. Miller <davem@davemloft.net>
It doesn't matter for use in 64-bit objects, but when used in
32-bit environments the top 32 bits of the local and in
registers get chopped off on the next register window
spill/restore, which leads to subtle and difficult-to-track-down
bugs.
Signed-off-by: David S. Miller <davem@davemloft.net>
For the case where the source is not aligned modulo 8
we don't use load-twins to suck the data in, and this
kills performance since normal loads allocate in the
L1 cache (unlike load-twins), so big memcpys wipe out
the entire L1 D-cache.
We need to allocate a register window to implement this
properly, but that actually simplifies a lot of things
as a nice side-effect.
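Schematically, in C (the helper names here are made up; the real
code is hand-written assembler):

    /* Sketch: both alignment cases now use the cache-avoiding
     * load-twin instructions for the bulk of a big copy.  Before,
     * the misaligned case fell back to normal loads, which allocate
     * in the L1 D-cache and so wipe it out on large copies. */
    if (((unsigned long)src & 7UL) == 0)
            copy_bulk_ldtwin_aligned(dst, src, len);
    else
            copy_bulk_ldtwin_shifted(dst, src, len); /* realign via shifts */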
Signed-off-by: David S. Miller <davem@davemloft.net>
Check the cpu type in the OBP device tree before committing to
using the optimized Niagara memcpy and memset implementation.
If we don't recognize the cpu type, use a completely generic
version.
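Schematically (a sketch; the helper names are made up, and the real
code walks the OBP tree during early boot):

    /* Only patch in the Niagara routines when the cpu node from the
     * OBP device tree is one we recognize; anything unknown gets the
     * completely generic version. */
    const char *name = prom_cpu_node_name();        /* hypothetical */

    if (!strcmp(name, "SUNW,UltraSPARC-T1"))
            patch_in(niagara_memcpy, niagara_memset);
    else
            patch_in(generic_memcpy, generic_memset);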
Signed-off-by: David S. Miller <davem@davemloft.net>
Take a page from the powerpc folks and just calculate the
delay factor directly.
Since frequency scaling chips use a system-tick register,
the value is going to be the same system-wide.
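Schematically (a sketch; read_tick() and tick_per_usec stand in for
the real tick accessor and the precomputed factor):

    /* With the factor derived once from the system tick frequency
     * there is nothing per-cpu to calibrate, and since the system
     * tick register is shared, one value is right everywhere. */
    void udelay(unsigned long usecs)
    {
            unsigned long start = read_tick();
            unsigned long ticks = usecs * tick_per_usec;

            while (read_tick() - start < ticks)
                    cpu_relax();
    }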
Signed-off-by: David S. Miller <davem@davemloft.net>
The manual says that it is required and we actually have crash reports
where loads see stale data due to not having membars here.
In one case the networking does:
memset(skb, 0, offsetof(struct sk_buff, truesize));
and then some code later checks skb->nohdr for zero, but it's still
the value that was there before the memset().
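In other words, the block-init store loops in the Niagara memset and
memcpy routines have to be followed by (a sketch, as inline assembly):

    /* After the block-init stores, a #Sync membar is required before
     * returning; without it a later load such as the skb->nohdr
     * check above can observe pre-memset data. */
    __asm__ __volatile__("membar #Sync" : : : "memory");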
Note that arch/sparc64/lib/xor.S already got this right.
Signed-off-by: David S. Miller <davem@davemloft.net>
Both csum_partial() and the csum_partial_copy*() family of routines
forget to do a final fold on the computed checksum value on sparc64.
So do the standard Sparc "add + set condition codes, add carry"
sequence, then make sure the high 32-bits of the return value are
clear.
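In C, one way to write the described fold is (a sketch):

    /* addcc/addc: do the 32-bit add and feed the carry back in, then
     * return with the high 32 bits of the 64-bit register clear (the
     * srl by 0 in the real assembler). */
    static inline unsigned int csum_fold64(unsigned long sum)
    {
            unsigned int lo = sum;
            unsigned int hi = sum >> 32;
            unsigned int res = lo + hi;

            if (res < lo)   /* carry out of the 32-bit add */
                    res++;
            return res;     /* zero-extended on return */
    }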
Based upon some excellent detective work and debugging done by
Richard Braun and Samuel Thibault.
Signed-off-by: David S. Miller <davem@davemloft.net>
We only need to write an invalid tag every 16 bytes,
so taking advantage of this can save many instructions
compared to the simple memset() call we make now.
A prefetching implementation is used for sun4u,
and a block-init store version is implemented for Niagara.
The next trick is to be able to perform an init and
a copy_tsb() in parallel when growing a TSB table.
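The C equivalent of the simple version looks like this (a sketch;
TSB_TAG_INVALID stands in for the real invalid-tag constant, and the
real versions are hand-tuned assembler):

    static void tsb_init(unsigned char *tsb, unsigned long size)
    {
            unsigned long i;

            /* A TSB entry is 16 bytes (tag + TTE).  Storing one
             * invalid tag per entry is enough to invalidate it, so
             * there is no need to clear the other 8 bytes the way
             * memset() does. */
            for (i = 0; i < size; i += 16)
                    *(unsigned long *)(tsb + i) = TSB_TAG_INVALID;
    }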
Signed-off-by: David S. Miller <davem@davemloft.net>
This gives more consistent bogomips and delay() semantics,
especially on sun4v. It gives weird looking values though...
Signed-off-by: David S. Miller <davem@davemloft.net>
Yes, you heard it right, they changed the PTE layout for
SUN4V. Ho hum...
This is the simple and inefficient way to support this.
It'll get optimized, don't worry.
Signed-off-by: David S. Miller <davem@davemloft.net>
We need to restore the %asi register properly.
For the kernel this means get_fs(), for user this
means ASI_PNF.
Also, NGcopy_to_user.S was including U3memcpy.S instead
of NGmemcpy.S, oops :-)
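For the user case the restore boils down to (a sketch, as inline
assembly; ASI_PNF is the primary no-fault ASI from <asm/asi.h>):

    /* Put %asi back to the primary no-fault ASI on the way out of
     * the user-copy routine; the kernel-copy variant instead
     * restores the value that get_fs() describes. */
    __asm__ __volatile__("wr %%g0, %0, %%asi" : : "i" (ASI_PNF));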
Signed-off-by: David S. Miller <davem@davemloft.net>
Happily we have no D-cache aliasing issues on these
chips, so the implementation is very straightforward.
Add a stub at bootup where the patching calls for
niagara/sun4v/hypervisor will be made.
Signed-off-by: David S. Miller <davem@davemloft.net>