Merge branch 'master'

David Teigland
2006-01-18 09:14:51 +00:00
committed by Steven Whitehouse
342 changed files with 12227 additions and 14370 deletions

View File

@@ -0,0 +1,271 @@
I/O Barriers
============
Tejun Heo <htejun@gmail.com>, July 22 2005
I/O barrier requests are used to guarantee ordering around the barrier
requests. Unless you're crazy enough to use disk drives for
implementing synchronization constructs (wow, sounds interesting...),
the ordering is meaningful only for write requests for things like
journal checkpoints. All requests queued before a barrier request
must be finished (made it to the physical medium) before the barrier
request is started, and all requests queued after the barrier request
must be started only after the barrier request is finished (again,
made it to the physical medium).
In other words, I/O barrier requests have the following two properties.
1. Request ordering
Requests cannot pass the barrier request. Preceding requests are
processed before the barrier and following requests after.
Depending on what features a drive supports, this can be done in one
of the following three ways.
i. For devices which have queue depth greater than 1 (TCQ devices) and
support ordered tags, block layer can just issue the barrier as an
ordered request and the lower level driver, controller and drive
itself are responsible for making sure that the ordering constraint is
met. Most modern SCSI controllers/drives should support this.
NOTE: SCSI ordered tag isn't currently used due to limitation in the
SCSI midlayer, see the following random notes section.
ii. For devices which have queue depth greater than 1 but don't
support ordered tags, the block layer ensures that the requests preceding
a barrier request finish before issuing the barrier request. Also,
it defers requests following the barrier until the barrier request is
finished. Older SCSI controllers/drives and SATA drives fall in this
category.
iii. Devices which have queue depth of 1. This is a degenerate case
of ii. Just keeping issue order suffices. Ancient SCSI
controllers/drives and IDE drives are in this category.
2. Forced flushing to physical medium
Again, if you're not gonna do synchronization with disk drives (dang,
it sounds even more appealing now!), the reason you use I/O barriers
is mainly to protect filesystem integrity when power failure or some
other events abruptly stop the drive from operating and possibly make
the drive lose data in its cache. So, I/O barriers need to guarantee
that requests actually get written to non-volatile medium in order.
There are four cases,
i. No write-back cache. Keeping requests ordered is enough.
ii. Write-back cache but no flush operation. There's no way to
guarantee physical-medium commit order. This kind of device can't
support I/O barriers.
iii. Write-back cache and flush operation but no FUA (forced unit
access). We need two cache flushes - before and after the barrier
request.
iv. Write-back cache, flush operation and FUA. We still need one
flush to make sure requests preceding a barrier are written to medium,
but post-barrier flush can be avoided by using FUA write on the
barrier itself.
How to support barrier requests in drivers
------------------------------------------
All barrier handling is done inside the block layer proper. All a low
level driver has to do is implement its prepare_flush_fn and use one of
the following two functions to indicate what barrier type it supports
and how to prepare flush requests. Note that the term 'ordered' is
used to indicate the whole sequence of performing barrier requests
including draining and flushing.
typedef void (prepare_flush_fn)(request_queue_t *q, struct request *rq);

int blk_queue_ordered(request_queue_t *q, unsigned ordered,
                      prepare_flush_fn *prepare_flush_fn,
                      unsigned gfp_mask);
int blk_queue_ordered_locked(request_queue_t *q, unsigned ordered,
                             prepare_flush_fn *prepare_flush_fn,
                             unsigned gfp_mask);
The only difference between the two functions is whether or not the
caller is holding q->queue_lock on entry. The latter expects the
caller to be holding the lock.
@q : the queue in question
@ordered : the ordered mode the driver/device supports
@prepare_flush_fn : this function should prepare @rq such that it
flushes cache to physical medium when executed
@gfp_mask : gfp_mask used when allocating data structures
for ordered processing
For example, SCSI disk driver's prepare_flush_fn looks like the
following.
static void sd_prepare_flush(request_queue_t *q, struct request *rq)
{
        memset(rq->cmd, 0, sizeof(rq->cmd));
        rq->flags |= REQ_BLOCK_PC;
        rq->timeout = SD_TIMEOUT;
        rq->cmd[0] = SYNCHRONIZE_CACHE;
}
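To complete the picture, here is a minimal sketch (not taken from any
in-tree driver) of the registration half. It assumes a hypothetical
driver whose device has a write-back cache and a flush command but
neither ordered tags nor FUA, so it picks QUEUE_ORDERED_DRAIN_FLUSH
(described below); the driver name and error message are illustrative
assumptions.

#include <linux/blkdev.h>

static int mydrv_setup_barriers(request_queue_t *q)
{
        int err;

        /* drain-based ordering with pre- and post-barrier cache flushes;
         * sd_prepare_flush() above is an example of the prepare_flush_fn
         * being registered here. */
        err = blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH,
                                sd_prepare_flush, GFP_KERNEL);
        if (err)
                printk(KERN_ERR "mydrv: failed to set up ordered mode\n");
        return err;
}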
The following seven ordered modes are supported. The table below
shows which mode should be used depending on what features a
device/driver supports. In the leftmost column of the table, the
QUEUE_ORDERED_ prefix is omitted from the mode names to save space.
The table is followed by a description of each mode. Note that in the
descriptions of QUEUE_ORDERED_DRAIN*, '=>' is used whereas '->' is
used for QUEUE_ORDERED_TAG* descriptions. '=>' indicates that the
preceding step must be complete before proceeding to the next step.
'->' indicates that the next step can start as soon as the previous
step is issued.
                write-back cache    ordered tag    flush    FUA
-----------------------------------------------------------------------
NONE                yes/no              N/A         no      N/A
DRAIN                 no                no          N/A     N/A
DRAIN_FLUSH          yes                no          yes     no
DRAIN_FUA            yes                no          yes     yes
TAG                   no                yes         N/A     N/A
TAG_FLUSH            yes                yes         yes     no
TAG_FUA              yes                yes         yes     yes
QUEUE_ORDERED_NONE
I/O barriers are not needed and/or supported.
Sequence: N/A
QUEUE_ORDERED_DRAIN
Requests are ordered by draining the request queue and cache
flushing isn't needed.
Sequence: drain => barrier
QUEUE_ORDERED_DRAIN_FLUSH
Requests are ordered by draining the request queue and both
pre-barrier and post-barrier cache flushes are needed.
Sequence: drain => preflush => barrier => postflush
QUEUE_ORDERED_DRAIN_FUA
Requests are ordered by draining the request queue and
pre-barrier cache flushing is needed. By using FUA on barrier
request, post-barrier flushing can be skipped.
Sequence: drain => preflush => barrier
QUEUE_ORDERED_TAG
Requests are ordered by ordered tag and cache flushing isn't
needed.
Sequence: barrier
QUEUE_ORDERED_TAG_FLUSH
Requests are ordered by ordered tag and both pre-barrier and
post-barrier cache flushes are needed.
Sequence: preflush -> barrier -> postflush
QUEUE_ORDERED_TAG_FUA
Requests are ordered by ordered tag and pre-barrier cache
flushing is needed. By using FUA on barrier request,
post-barrier flushing can be skipped.
Sequence: preflush -> barrier
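As a reading aid for the table and mode descriptions above, the helper
below (an assumed illustration, not part of the block layer API) maps a
device's feature bits onto one of the QUEUE_ORDERED_* modes; a driver
would pass the result to blk_queue_ordered().

static unsigned select_ordered_mode(int wcache, int ordered_tag,
                                    int flush, int fua)
{
        if (!wcache)
                /* no write-back cache: draining or ordered tags alone
                 * suffice (the DRAIN / TAG rows) */
                return ordered_tag ? QUEUE_ORDERED_TAG : QUEUE_ORDERED_DRAIN;
        if (!flush)
                /* write-back cache but no flush operation: barriers
                 * can't be supported at all (case ii above) */
                return QUEUE_ORDERED_NONE;
        if (ordered_tag)
                return fua ? QUEUE_ORDERED_TAG_FUA : QUEUE_ORDERED_TAG_FLUSH;
        return fua ? QUEUE_ORDERED_DRAIN_FUA : QUEUE_ORDERED_DRAIN_FLUSH;
}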
Random notes/caveats
--------------------
* SCSI layer currently can't use TAG ordering even if the drive,
controller and driver support it. The problem is that SCSI midlayer
request dispatch function is not atomic. It releases the queue lock and
switches to the SCSI host lock during issue, so it is possible, and over
time likely, that requests change their relative positions. Once
this problem is solved, TAG ordering can be enabled.
* Currently, no matter which ordered mode is used, there can be only
one barrier request in progress. All I/O barriers are held off by the
block layer until the previous I/O barrier is complete. This doesn't
make any difference for DRAIN ordered devices, but, for TAG ordered
devices with very high command latency, passing multiple I/O barriers
to the low level *might* be helpful if they are very frequent. Well, this
certainly is a non-issue. I'm writing this just to make clear that no
two I/O barriers are ever passed to the low-level driver.
* Completion order. Requests in ordered sequence are issued in order
but not required to finish in order. Barrier implementation can
handle out-of-order completion of ordered sequence. IOW, the requests
MUST be processed in order but the hardware/software completion paths
are allowed to reorder completion notifications - eg. current SCSI
midlayer doesn't preserve completion order during error handling.
* Requeueing order. Low-level drivers are free to requeue any request
after they removed it from the request queue with
blkdev_dequeue_request(). As barrier sequence should be kept in order
when requeued, generic elevator code takes care of putting requests in
order around barrier. See blk_ordered_req_seq() and
ELEVATOR_INSERT_REQUEUE handling in __elv_add_request() for details.
Note that block drivers must not requeue preceding requests while
completing later requests in an ordered sequence. Currently, no
error checking is done against this.
* Error handling. Currently, block layer will report error to upper
layer if any of requests in an ordered sequence fails. Unfortunately,
this doesn't seem to be enough. Look at the following request flow.
QUEUE_ORDERED_TAG_FLUSH is in use.
   [0] [1] [2] [3] [pre] [barrier] [post]  <  [4] [5] [6] ... >
                                                still in elevator
Let's say request [2], [3] are write requests to update file system
metadata (journal or whatever) and [barrier] is used to mark that
those updates are valid. Consider the following sequence.
i. Requests [0] ~ [post] leave the request queue and enter the
low-level driver.
ii. After a while, unfortunately, something goes wrong and the
drive fails [2]. Note that any of [0], [1] and [3] could have
completed by this time, but [pre] couldn't have been finished
as the drive must process it in order and it failed before
processing that command.
iii. Error handling kicks in and determines that the error is
unrecoverable and fails [2], and resumes operation.
iv. [pre] [barrier] [post] gets processed.
v. *BOOM* power fails
The problem here is that the barrier request is *supposed* to indicate
that filesystem update requests [2] and [3] made it safely to the
physical medium and, if the machine crashes after the barrier is
written, filesystem recovery code can depend on that. Sadly, that
isn't true in this case anymore. IOW, the success of an I/O barrier
should also be dependent on the success of some of the preceding
requests, where only the upper layer (filesystem) knows what 'some' is.
This can be solved by implementing a way to tell the block layer which
requests affect the success of the following barrier request and
making lower level drivers resume operation on error only after the
block layer tells them to do so.
As the probability of this happening is very low and the drive would
have to be faulty anyway, implementing the fix is probably overkill.
But, still, it's there.
* In previous drafts of the barrier implementation, there was a
fallback mechanism such that, if FUA or ordered TAG failed, a less
fancy ordered mode could be selected and the failed barrier request
retried automatically. The rationale for this feature was that, as FUA
is pretty new in the ATA world and ordered tag was never widely used,
there could be devices which report supporting those features but choke
when actually given such requests.
This was removed for two reasons: 1. it's overkill, and 2. it's
impossible to implement properly when TAG ordering is used, as low
level drivers resume after an error automatically. If it's ever
needed, adding it back and modifying low level drivers accordingly
shouldn't be difficult.

View File

@@ -86,6 +86,62 @@ Mount options
The default is infinite. Note that the size of read requests is
limited anyway to 32 pages (which is 128kbyte on i386).
Sysfs
~~~~~
FUSE sets up the following hierarchy in sysfs:
/sys/fs/fuse/connections/N/
where N is an increasing number allocated to each new connection.
For each connection the following attributes are defined:
'waiting'
The number of requests which are waiting to be transferred to
userspace or being processed by the filesystem daemon. If there is
no filesystem activity and 'waiting' is non-zero, then the
filesystem is hung or deadlocked.
'abort'
Writing anything into this file will abort the filesystem
connection. This means that all waiting requests will be aborted and
an error returned for all aborted and new requests.
Only a privileged user may read or write these attributes.
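As a concrete illustration, the following small userspace program (a
sketch, not shipped with FUSE) reads the 'waiting' count and, when
invoked with the argument "abort", writes to the 'abort' attribute to
tear the connection down. The connection number 1 in the paths is an
assumption; substitute the N of the connection you are inspecting.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        unsigned long waiting = 0;
        FILE *f;

        /* number of requests stuck in (or being processed by) the daemon */
        f = fopen("/sys/fs/fuse/connections/1/waiting", "r");
        if (f) {
                if (fscanf(f, "%lu", &waiting) != 1)
                        waiting = 0;
                fclose(f);
        }
        printf("requests waiting: %lu\n", waiting);

        /* writing anything to 'abort' aborts the connection (privileged) */
        if (argc > 1 && strcmp(argv[1], "abort") == 0) {
                f = fopen("/sys/fs/fuse/connections/1/abort", "w");
                if (f) {
                        fputs("1\n", f);
                        fclose(f);
                }
        }
        return 0;
}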
Aborting a filesystem connection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to get into certain situations where the filesystem is
not responding. Reasons for this may be:
a) Broken userspace filesystem implementation
b) Network connection down
c) Accidental deadlock
d) Malicious deadlock
(For more on c) and d) see later sections)
In any of these cases it may be useful to abort the connection to
the filesystem. There are several ways to do this:
- Kill the filesystem daemon. Works in case of a) and b)
- Kill the filesystem daemon and all users of the filesystem. Works
in all cases except some malicious deadlocks
- Use forced umount (umount -f). Works in all cases but only if
filesystem is still attached (it hasn't been lazy unmounted)
- Abort filesystem through the sysfs interface. Most powerful
method, always works.
How do non-privileged mounts work?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -313,3 +369,10 @@ faulted with get_user_pages(). The 'req->locked' flag indicates
when the copy is taking place, and interruption is delayed until
this flag is unset.
Scenario 3 - Tricky deadlock with asynchronous read
---------------------------------------------------
The same situation as above, except thread-1 will wait on page lock
and hence it will be uninterruptible as well. The solution is to
abort the connection with forced umount (if mount is attached) or
through the abort attribute in sysfs.

View File

@@ -68,3 +68,4 @@ tuner=66 - LG NTSC (TALN mini series)
tuner=67 - Philips TD1316 Hybrid Tuner
tuner=68 - Philips TUV1236D ATSC/NTSC dual in
tuner=69 - Tena TNF 5335 MF
tuner=70 - Samsung TCPN 2121P30A

View File

@@ -1696,11 +1696,13 @@ M: mtk-manpages@gmx.net
W: ftp://ftp.kernel.org/pub/linux/docs/manpages
S: Maintained
MARVELL MV64340 ETHERNET DRIVER
MARVELL MV643XX ETHERNET DRIVER
P: Dale Farnsworth
M: dale@farnsworth.org
P: Manish Lachwani
L: linux-mips@linux-mips.org
M: mlachwani@mvista.com
L: netdev@vger.kernel.org
S: Supported
S: Odd Fixes for 2.4; Maintained for 2.6.
MATROX FRAMEBUFFER DRIVER
P: Petr Vandrovec

View File

@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 15
EXTRAVERSION =
SUBLEVEL = 16
EXTRAVERSION =-rc1
NAME=Sliding Snow Leopard
# *DOCUMENTATION*
@@ -106,12 +106,13 @@ KBUILD_OUTPUT := $(shell cd $(KBUILD_OUTPUT) && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
$(error output directory "$(saved-output)" does not exist))
.PHONY: $(MAKECMDGOALS)
.PHONY: $(MAKECMDGOALS) cdbuilddir
$(MAKECMDGOALS) _all: cdbuilddir
$(filter-out _all,$(MAKECMDGOALS)) _all:
cdbuilddir:
$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
KBUILD_SRC=$(CURDIR) \
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $@
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $(MAKECMDGOALS)
# Leave processing to above invocation of make
skip-makefile := 1
@@ -262,6 +263,13 @@ export quiet Q KBUILD_VERBOSE
# cc support functions to be used (only) in arch/$(ARCH)/Makefile
# See documentation in Documentation/kbuild/makefiles.txt
# as-option
# Usage: cflags-y += $(call as-option, -Wa$(comma)-isa=foo,)
as-option = $(shell if $(CC) $(CFLAGS) $(1) -Wa,-Z -c -o /dev/null \
-xassembler /dev/null > /dev/null 2>&1; then echo "$(1)"; \
else echo "$(2)"; fi ;)
# cc-option
# Usage: cflags-y += $(call cc-option, -march=winchip-c6, -march=i586)
@@ -337,8 +345,9 @@ AFLAGS := -D__ASSEMBLY__
# Read KERNELRELEASE from .kernelrelease (if it exists)
KERNELRELEASE = $(shell cat .kernelrelease 2> /dev/null)
KERNELVERSION = $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE \
export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION \
ARCH CONFIG_SHELL HOSTCC HOSTCFLAGS CROSS_COMPILE AS LD CC \
CPP AR NM STRIP OBJCOPY OBJDUMP MAKE AWK GENKSYMS PERL UTS_MACHINE \
HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
@@ -433,6 +442,7 @@ export KBUILD_DEFCONFIG
config %config: scripts_basic outputmakefile FORCE
$(Q)mkdir -p include/linux
$(Q)$(MAKE) $(build)=scripts/kconfig $@
$(Q)$(MAKE) .kernelrelease
else
# ===========================================================================
@@ -542,7 +552,7 @@ export INSTALL_PATH ?= /boot
# makefile but the arguement can be passed to make if needed.
#
MODLIB := $(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE)
MODLIB = $(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE)
export MODLIB
@@ -783,12 +793,10 @@ endif
localver-full = $(localver)$(localver-auto)
# Store (new) KERNELRELASE string in .kernelrelease
kernelrelease = \
$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)$(localver-full)
kernelrelease = $(KERNELVERSION)$(localver-full)
.kernelrelease: FORCE
$(Q)rm -f .kernelrelease
$(Q)echo $(kernelrelease) > .kernelrelease
$(Q)echo " Building kernel $(kernelrelease)"
$(Q)rm -f $@
$(Q)echo $(kernelrelease) > $@
# Things we need to do before we recursively start building the kernel
@@ -898,7 +906,7 @@ define filechk_version.h
)
endef
include/linux/version.h: $(srctree)/Makefile FORCE
include/linux/version.h: $(srctree)/Makefile .config FORCE
$(call filechk,version.h)
# ---------------------------------------------------------------------------
@@ -1301,9 +1309,10 @@ checkstack:
$(PERL) $(src)/scripts/checkstack.pl $(ARCH)
kernelrelease:
@echo $(KERNELRELEASE)
$(if $(wildcard .kernelrelease), $(Q)echo $(KERNELRELEASE), \
$(error kernelrelease not valid - run 'make *config' to update it))
kernelversion:
@echo $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
@echo $(KERNELVERSION)
# FIXME Should go into a make.lib or something
# ===========================================================================

README
View File

@@ -1,4 +1,4 @@
Linux kernel release 2.6.xx
Linux kernel release 2.6.xx <http://kernel.org>
These are the release notes for Linux version 2.6. Read them carefully,
as they tell you what this is all about, explain how to install the
@@ -6,23 +6,31 @@ kernel, and what to do if something goes wrong.
WHAT IS LINUX?
Linux is a Unix clone written from scratch by Linus Torvalds with
assistance from a loosely-knit team of hackers across the Net.
It aims towards POSIX compliance.
Linux is a clone of the operating system Unix, written from scratch by
Linus Torvalds with assistance from a loosely-knit team of hackers across
the Net. It aims towards POSIX and Single UNIX Specification compliance.
It has all the features you would expect in a modern fully-fledged
Unix, including true multitasking, virtual memory, shared libraries,
demand loading, shared copy-on-write executables, proper memory
management and TCP/IP networking.
It has all the features you would expect in a modern fully-fledged Unix,
including true multitasking, virtual memory, shared libraries, demand
loading, shared copy-on-write executables, proper memory management,
and multistack networking including IPv4 and IPv6.
It is distributed under the GNU General Public License - see the
accompanying COPYING file for more details.
ON WHAT HARDWARE DOES IT RUN?
Linux was first developed for 386/486-based PCs. These days it also
runs on ARMs, DEC Alphas, SUN Sparcs, M68000 machines (like Atari and
Amiga), MIPS and PowerPC, and others.
Although originally developed first for 32-bit x86-based PCs (386 or higher),
today Linux also runs on (at least) the Compaq Alpha AXP, Sun SPARC and
UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH,
IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64, AXIS CRIS,
and Renesas M32R architectures.
Linux is easily portable to most general-purpose 32- or 64-bit architectures
as long as they have a paged memory management unit (PMMU) and a port of the
GNU C compiler (gcc) (part of The GNU Compiler Collection, GCC). Linux has
also been ported to a number of architectures without a PMMU, although
functionality is then obviously somewhat limited.
DOCUMENTATION:

View File

@@ -141,7 +141,7 @@ int show_interrupts(struct seq_file *p, void *v)
if (i < NR_IRQS) {
action = irq_desc[i].action;
if (!action)
continue;
goto out;
seq_printf(p, "%3d: %10u ", i, kstat_irqs(i));
seq_printf(p, " %s", action->name);
for (action = action->next; action; action = action->next) {
@@ -152,6 +152,7 @@ int show_interrupts(struct seq_file *p, void *v)
show_fiq_list(p, v);
seq_printf(p, "Err: %10lu\n", irq_err_count);
}
out:
return 0;
}

View File

@@ -527,7 +527,7 @@ static int ptrace_getfpregs(struct task_struct *tsk, void *ufp)
static int ptrace_setfpregs(struct task_struct *tsk, void *ufp)
{
set_stopped_child_used_math(tsk);
return copy_from_user(&task_threas_info(tsk)->fpstate, ufp,
return copy_from_user(&task_thread_info(tsk)->fpstate, ufp,
sizeof(struct user_fp)) ? -EFAULT : 0;
}

View File

@@ -37,10 +37,7 @@ CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
# CPU-specific tuning. Anything which can be shared with UML should go here.
include $(srctree)/arch/i386/Makefile.cpu
# -mregparm=3 works ok on gcc-3.0 and later
#
cflags-$(CONFIG_REGPARM) += $(shell if [ $(call cc-version) -ge 0300 ] ; then \
echo "-mregparm=3"; fi ;)
cflags-$(CONFIG_REGPARM) += -mregparm=3
# Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
# a lot more stack due to the lack of sharing of stacklots:

View File

@@ -980,7 +980,7 @@ static int powernowk8_verify(struct cpufreq_policy *pol)
}
/* per CPU init entry point to the driver */
static int __init powernowk8_cpu_init(struct cpufreq_policy *pol)
static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
{
struct powernow_k8_data *data;
cpumask_t oldmask = CPU_MASK_ALL;
@@ -1141,7 +1141,7 @@ static struct cpufreq_driver cpufreq_amd64_driver = {
};
/* driver entry point for init */
static int __init powernowk8_init(void)
static int __cpuinit powernowk8_init(void)
{
unsigned int i, supported_cpus = 0;

View File

@@ -268,7 +268,7 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
pkmap_page_table = pte;
}
static void __devinit free_new_highpage(struct page *page)
static void __meminit free_new_highpage(struct page *page)
{
set_page_count(page, 1);
__free_page(page);

View File

@@ -628,9 +628,11 @@ static int pfm_write_ibr_dbr(int mode, pfm_context_t *ctx, void *arg, int count,
#include "perfmon_itanium.h"
#include "perfmon_mckinley.h"
#include "perfmon_montecito.h"
#include "perfmon_generic.h"
static pmu_config_t *pmu_confs[]={
&pmu_conf_mont,
&pmu_conf_mck,
&pmu_conf_ita,
&pmu_conf_gen, /* must be last */

View File

@@ -0,0 +1,269 @@
/*
* This file contains the Montecito PMU register description tables
* and pmc checker used by perfmon.c.
*
* Copyright (c) 2005-2006 Hewlett-Packard Development Company, L.P.
* Contributed by Stephane Eranian <eranian@hpl.hp.com>
*/
static int pfm_mont_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs);
#define RDEP_MONT_ETB (RDEP(38)|RDEP(39)|RDEP(48)|RDEP(49)|RDEP(50)|RDEP(51)|RDEP(52)|RDEP(53)|RDEP(54)|\
RDEP(55)|RDEP(56)|RDEP(57)|RDEP(58)|RDEP(59)|RDEP(60)|RDEP(61)|RDEP(62)|RDEP(63))
#define RDEP_MONT_DEAR (RDEP(32)|RDEP(33)|RDEP(36))
#define RDEP_MONT_IEAR (RDEP(34)|RDEP(35))
static pfm_reg_desc_t pfm_mont_pmc_desc[PMU_MAX_PMCS]={
/* pmc0 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc1 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc2 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc3 */ { PFM_REG_CONTROL , 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc4 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(4),0, 0, 0}, {0,0, 0, 0}},
/* pmc5 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(5),0, 0, 0}, {0,0, 0, 0}},
/* pmc6 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(6),0, 0, 0}, {0,0, 0, 0}},
/* pmc7 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(7),0, 0, 0}, {0,0, 0, 0}},
/* pmc8 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(8),0, 0, 0}, {0,0, 0, 0}},
/* pmc9 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(9),0, 0, 0}, {0,0, 0, 0}},
/* pmc10 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(10),0, 0, 0}, {0,0, 0, 0}},
/* pmc11 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(11),0, 0, 0}, {0,0, 0, 0}},
/* pmc12 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(12),0, 0, 0}, {0,0, 0, 0}},
/* pmc13 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(13),0, 0, 0}, {0,0, 0, 0}},
/* pmc14 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(14),0, 0, 0}, {0,0, 0, 0}},
/* pmc15 */ { PFM_REG_COUNTING, 6, 0x2000000, 0x7c7fff7f, NULL, pfm_mont_pmc_check, {RDEP(15),0, 0, 0}, {0,0, 0, 0}},
/* pmc16 */ { PFM_REG_NOTIMPL, },
/* pmc17 */ { PFM_REG_NOTIMPL, },
/* pmc18 */ { PFM_REG_NOTIMPL, },
/* pmc19 */ { PFM_REG_NOTIMPL, },
/* pmc20 */ { PFM_REG_NOTIMPL, },
/* pmc21 */ { PFM_REG_NOTIMPL, },
/* pmc22 */ { PFM_REG_NOTIMPL, },
/* pmc23 */ { PFM_REG_NOTIMPL, },
/* pmc24 */ { PFM_REG_NOTIMPL, },
/* pmc25 */ { PFM_REG_NOTIMPL, },
/* pmc26 */ { PFM_REG_NOTIMPL, },
/* pmc27 */ { PFM_REG_NOTIMPL, },
/* pmc28 */ { PFM_REG_NOTIMPL, },
/* pmc29 */ { PFM_REG_NOTIMPL, },
/* pmc30 */ { PFM_REG_NOTIMPL, },
/* pmc31 */ { PFM_REG_NOTIMPL, },
/* pmc32 */ { PFM_REG_CONFIG, 0, 0x30f01ffffffffff, 0x30f01ffffffffff, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc33 */ { PFM_REG_CONFIG, 0, 0x0, 0x1ffffffffff, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc34 */ { PFM_REG_CONFIG, 0, 0xf01ffffffffff, 0xf01ffffffffff, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc35 */ { PFM_REG_CONFIG, 0, 0x0, 0x1ffffffffff, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc36 */ { PFM_REG_CONFIG, 0, 0xfffffff0, 0xf, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc37 */ { PFM_REG_MONITOR, 4, 0x0, 0x3fff, NULL, pfm_mont_pmc_check, {RDEP_MONT_IEAR, 0, 0, 0}, {0, 0, 0, 0}},
/* pmc38 */ { PFM_REG_CONFIG, 0, 0xdb6, 0x2492, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc39 */ { PFM_REG_MONITOR, 6, 0x0, 0xffcf, NULL, pfm_mont_pmc_check, {RDEP_MONT_ETB,0, 0, 0}, {0,0, 0, 0}},
/* pmc40 */ { PFM_REG_MONITOR, 6, 0x2000000, 0xf01cf, NULL, pfm_mont_pmc_check, {RDEP_MONT_DEAR,0, 0, 0}, {0,0, 0, 0}},
/* pmc41 */ { PFM_REG_CONFIG, 0, 0x00002078fefefefe, 0x1e00018181818, NULL, pfm_mont_pmc_check, {0,0, 0, 0}, {0,0, 0, 0}},
/* pmc42 */ { PFM_REG_MONITOR, 6, 0x0, 0x7ff4f, NULL, pfm_mont_pmc_check, {RDEP_MONT_ETB,0, 0, 0}, {0,0, 0, 0}},
{ PFM_REG_END , 0, 0x0, -1, NULL, NULL, {0,}, {0,}}, /* end marker */
};
static pfm_reg_desc_t pfm_mont_pmd_desc[PMU_MAX_PMDS]={
/* pmd0 */ { PFM_REG_NOTIMPL, },
/* pmd1 */ { PFM_REG_NOTIMPL, },
/* pmd2 */ { PFM_REG_NOTIMPL, },
/* pmd3 */ { PFM_REG_NOTIMPL, },
/* pmd4 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(4),0, 0, 0}},
/* pmd5 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(5),0, 0, 0}},
/* pmd6 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(6),0, 0, 0}},
/* pmd7 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(7),0, 0, 0}},
/* pmd8 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(8),0, 0, 0}},
/* pmd9 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(9),0, 0, 0}},
/* pmd10 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(10),0, 0, 0}},
/* pmd11 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(11),0, 0, 0}},
/* pmd12 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(12),0, 0, 0}},
/* pmd13 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(13),0, 0, 0}},
/* pmd14 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(14),0, 0, 0}},
/* pmd15 */ { PFM_REG_COUNTING, 0, 0x0, -1, NULL, NULL, {0,0, 0, 0}, {RDEP(15),0, 0, 0}},
/* pmd16 */ { PFM_REG_NOTIMPL, },
/* pmd17 */ { PFM_REG_NOTIMPL, },
/* pmd18 */ { PFM_REG_NOTIMPL, },
/* pmd19 */ { PFM_REG_NOTIMPL, },
/* pmd20 */ { PFM_REG_NOTIMPL, },
/* pmd21 */ { PFM_REG_NOTIMPL, },
/* pmd22 */ { PFM_REG_NOTIMPL, },
/* pmd23 */ { PFM_REG_NOTIMPL, },
/* pmd24 */ { PFM_REG_NOTIMPL, },
/* pmd25 */ { PFM_REG_NOTIMPL, },
/* pmd26 */ { PFM_REG_NOTIMPL, },
/* pmd27 */ { PFM_REG_NOTIMPL, },
/* pmd28 */ { PFM_REG_NOTIMPL, },
/* pmd29 */ { PFM_REG_NOTIMPL, },
/* pmd30 */ { PFM_REG_NOTIMPL, },
/* pmd31 */ { PFM_REG_NOTIMPL, },
/* pmd32 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(33)|RDEP(36),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd33 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(32)|RDEP(36),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd34 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(35),0, 0, 0}, {RDEP(37),0, 0, 0}},
/* pmd35 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(34),0, 0, 0}, {RDEP(37),0, 0, 0}},
/* pmd36 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP(32)|RDEP(33),0, 0, 0}, {RDEP(40),0, 0, 0}},
/* pmd37 */ { PFM_REG_NOTIMPL, },
/* pmd38 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd39 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd40 */ { PFM_REG_NOTIMPL, },
/* pmd41 */ { PFM_REG_NOTIMPL, },
/* pmd42 */ { PFM_REG_NOTIMPL, },
/* pmd43 */ { PFM_REG_NOTIMPL, },
/* pmd44 */ { PFM_REG_NOTIMPL, },
/* pmd45 */ { PFM_REG_NOTIMPL, },
/* pmd46 */ { PFM_REG_NOTIMPL, },
/* pmd47 */ { PFM_REG_NOTIMPL, },
/* pmd48 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd49 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd50 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd51 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd52 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd53 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd54 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd55 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd56 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd57 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd58 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd59 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd60 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd61 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd62 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
/* pmd63 */ { PFM_REG_BUFFER, 0, 0x0, -1, NULL, NULL, {RDEP_MONT_ETB,0, 0, 0}, {RDEP(39),0, 0, 0}},
{ PFM_REG_END , 0, 0x0, -1, NULL, NULL, {0,}, {0,}}, /* end marker */
};
/*
* PMC reserved fields must have their power-up values preserved
*/
static int
pfm_mont_reserved(unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
unsigned long tmp1, tmp2, ival = *val;
/* remove reserved areas from user value */
tmp1 = ival & PMC_RSVD_MASK(cnum);
/* get reserved fields values */
tmp2 = PMC_DFL_VAL(cnum) & ~PMC_RSVD_MASK(cnum);
*val = tmp1 | tmp2;
DPRINT(("pmc[%d]=0x%lx, mask=0x%lx, reset=0x%lx, val=0x%lx\n",
cnum, ival, PMC_RSVD_MASK(cnum), PMC_DFL_VAL(cnum), *val));
return 0;
}
/*
* task can be NULL if the context is unloaded
*/
static int
pfm_mont_pmc_check(struct task_struct *task, pfm_context_t *ctx, unsigned int cnum, unsigned long *val, struct pt_regs *regs)
{
int ret = 0;
unsigned long val32 = 0, val38 = 0, val41 = 0;
unsigned long tmpval;
int check_case1 = 0;
int is_loaded;
/* first preserve the reserved fields */
pfm_mont_reserved(cnum, val, regs);
tmpval = *val;
/* sanity check */
if (ctx == NULL) return -EINVAL;
is_loaded = ctx->ctx_state == PFM_CTX_LOADED || ctx->ctx_state == PFM_CTX_MASKED;
/*
* we must clear the debug registers if pmc41 has a value which enables
* memory pipeline event constraints. In this case we need to clear the
* debug registers if they have not yet been accessed. This is required
* to avoid picking up stale state.
* PMC41 is "active" if:
* one of the pmc41.cfg_dtagXX fields is different from 0x3
* AND
* the corresponding pmc41.en_dbrpXX bit is set
* AND
* ctx_fl_using_dbreg == 0 (i.e., dbr not yet used)
*/
DPRINT(("cnum=%u val=0x%lx, using_dbreg=%d loaded=%d\n", cnum, tmpval, ctx->ctx_fl_using_dbreg, is_loaded));
if (cnum == 41 && is_loaded
&& (tmpval & 0x1e00000000000) && (tmpval & 0x18181818UL) != 0x18181818UL && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc[%d]=0x%lx has active pmc41 settings, clearing dbr\n", cnum, tmpval));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_DATA_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
/*
* we must clear the (instruction) debug registers if:
* pmc38.ig_ibrpX is 0 (enabled)
* AND
* ctx_fl_using_dbreg == 0 (i.e., dbr not yet used)
*/
if (cnum == 38 && is_loaded && ((tmpval & 0x492UL) != 0x492UL) && ctx->ctx_fl_using_dbreg == 0) {
DPRINT(("pmc38=0x%lx has active pmc38 settings, clearing ibr\n", tmpval));
/* don't mix debug with perfmon */
if (task && (task->thread.flags & IA64_THREAD_DBG_VALID) != 0) return -EINVAL;
/*
* a count of 0 will mark the debug registers as in use and also
* ensure that they are properly cleared.
*/
ret = pfm_write_ibr_dbr(PFM_CODE_RR, ctx, NULL, 0, regs);
if (ret) return ret;
}
switch(cnum) {
case 32: val32 = *val;
val38 = ctx->ctx_pmcs[38];
val41 = ctx->ctx_pmcs[41];
check_case1 = 1;
break;
case 38: val38 = *val;
val32 = ctx->ctx_pmcs[32];
val41 = ctx->ctx_pmcs[41];
check_case1 = 1;
break;
case 41: val41 = *val;
val32 = ctx->ctx_pmcs[32];
val38 = ctx->ctx_pmcs[38];
check_case1 = 1;
break;
}
/* check illegal configuration which can produce inconsistencies in tagging
* i-side events in L1D and L2 caches
*/
if (check_case1) {
ret = (((val41 >> 45) & 0xf) == 0 && ((val32>>57) & 0x1) == 0)
&& ((((val38>>1) & 0x3) == 0x2 || ((val38>>1) & 0x3) == 0)
|| (((val38>>4) & 0x3) == 0x2 || ((val38>>4) & 0x3) == 0));
if (ret) {
DPRINT(("invalid config pmc38=0x%lx pmc41=0x%lx pmc32=0x%lx\n", val38, val41, val32));
return -EINVAL;
}
}
*val = tmpval;
return 0;
}
/*
* impl_pmcs, impl_pmds are computed at runtime to minimize errors!
*/
static pmu_config_t pmu_conf_mont={
.pmu_name = "Montecito",
.pmu_family = 0x20,
.flags = PFM_PMU_IRQ_RESEND,
.ovfl_val = (1UL << 47) - 1,
.pmd_desc = pfm_mont_pmd_desc,
.pmc_desc = pfm_mont_pmc_desc,
.num_ibrs = 8,
.num_dbrs = 8,
.use_rr_dbregs = 1 /* debug registers are used for range restrictions */
};

View File

@@ -635,3 +635,39 @@ mem_init (void)
ia32_mem_init();
#endif
}
#ifdef CONFIG_MEMORY_HOTPLUG
void online_page(struct page *page)
{
ClearPageReserved(page);
set_page_count(page, 1);
__free_page(page);
totalram_pages++;
num_physpages++;
}
int add_memory(u64 start, u64 size)
{
pg_data_t *pgdat;
struct zone *zone;
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
int ret;
pgdat = NODE_DATA(0);
zone = pgdat->node_zones + ZONE_NORMAL;
ret = __add_pages(zone, start_pfn, nr_pages);
if (ret)
printk("%s: Problem encountered in __add_pages() as ret=%d\n",
__FUNCTION__, ret);
return ret;
}
int remove_memory(u64 start, u64 size)
{
return -EINVAL;
}
#endif

View File

@@ -454,14 +454,13 @@ static int __devinit is_valid_resource(struct pci_dev *dev, int idx)
return 0;
}
static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
static void __devinit
pcibios_fixup_resources(struct pci_dev *dev, int start, int limit)
{
struct pci_bus_region region;
int i;
int limit = (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) ? \
PCI_BRIDGE_RESOURCES : PCI_NUM_RESOURCES;
for (i = 0; i < limit; i++) {
for (i = start; i < limit; i++) {
if (!dev->resource[i].flags)
continue;
region.start = dev->resource[i].start;
@@ -472,6 +471,16 @@ static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
}
}
static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, 0, PCI_BRIDGE_RESOURCES);
}
static void __devinit pcibios_fixup_bridge_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, PCI_BRIDGE_RESOURCES, PCI_NUM_RESOURCES);
}
/*
* Called after each bus is probed, but before its children are examined.
*/
@@ -482,7 +491,7 @@ pcibios_fixup_bus (struct pci_bus *b)
if (b->self) {
pci_read_bridge_bases(b);
pcibios_fixup_device_resources(b->self);
pcibios_fixup_bridge_resources(b->self);
}
list_for_each_entry(dev, &b->devices, bus_list)
pcibios_fixup_device_resources(dev);

View File

@@ -40,8 +40,8 @@ struct sn_flush_device_common {
unsigned long sfdl_force_int_addr;
unsigned long sfdl_flush_value;
volatile unsigned long *sfdl_flush_addr;
uint32_t sfdl_persistent_busnum;
uint32_t sfdl_persistent_segment;
u32 sfdl_persistent_busnum;
u32 sfdl_persistent_segment;
struct pcibus_info *sfdl_pcibus_info;
};
@@ -56,7 +56,7 @@ struct sn_flush_device_kernel {
*/
struct sn_flush_nasid_entry {
struct sn_flush_device_kernel **widget_p; // Used as an array of wid_num
uint64_t iio_itte[8];
u64 iio_itte[8];
};
struct hubdev_info {
@@ -70,8 +70,8 @@ struct hubdev_info {
void *hdi_nodepda;
void *hdi_node_vertex;
uint32_t max_segment_number;
uint32_t max_pcibus_number;
u32 max_segment_number;
u32 max_pcibus_number;
};
extern void hubdev_init_node(nodepda_t *, cnodeid_t);

View File

@@ -3,7 +3,8 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1992-1997,2000-2004 Silicon Graphics, Inc. All Rights Reserved.
* Copyright (C) 1992-1997,2000-2006 Silicon Graphics, Inc. All Rights
* Reserved.
*/
#ifndef _ASM_IA64_SN_XTALK_XBOW_H
#define _ASM_IA64_SN_XTALK_XBOW_H
@@ -21,94 +22,94 @@
/* Register set for each xbow link */
typedef volatile struct xb_linkregs_s {
/*
/*
* we access these through synergy unswizzled space, so the address
* gets twiddled (i.e. references to 0x4 actually go to 0x0 and vv.)
* That's why we put the register first and filler second.
*/
uint32_t link_ibf;
uint32_t filler0; /* filler for proper alignment */
uint32_t link_control;
uint32_t filler1;
uint32_t link_status;
uint32_t filler2;
uint32_t link_arb_upper;
uint32_t filler3;
uint32_t link_arb_lower;
uint32_t filler4;
uint32_t link_status_clr;
uint32_t filler5;
uint32_t link_reset;
uint32_t filler6;
uint32_t link_aux_status;
uint32_t filler7;
u32 link_ibf;
u32 filler0; /* filler for proper alignment */
u32 link_control;
u32 filler1;
u32 link_status;
u32 filler2;
u32 link_arb_upper;
u32 filler3;
u32 link_arb_lower;
u32 filler4;
u32 link_status_clr;
u32 filler5;
u32 link_reset;
u32 filler6;
u32 link_aux_status;
u32 filler7;
} xb_linkregs_t;
typedef volatile struct xbow_s {
/* standard widget configuration 0x000000-0x000057 */
struct widget_cfg xb_widget; /* 0x000000 */
/* standard widget configuration 0x000000-0x000057 */
struct widget_cfg xb_widget; /* 0x000000 */
/* helper fieldnames for accessing bridge widget */
/* helper fieldnames for accessing bridge widget */
#define xb_wid_id xb_widget.w_id
#define xb_wid_stat xb_widget.w_status
#define xb_wid_err_upper xb_widget.w_err_upper_addr
#define xb_wid_err_lower xb_widget.w_err_lower_addr
#define xb_wid_control xb_widget.w_control
#define xb_wid_req_timeout xb_widget.w_req_timeout
#define xb_wid_int_upper xb_widget.w_intdest_upper_addr
#define xb_wid_int_lower xb_widget.w_intdest_lower_addr
#define xb_wid_err_cmdword xb_widget.w_err_cmd_word
#define xb_wid_llp xb_widget.w_llp_cfg
#define xb_wid_stat_clr xb_widget.w_tflush
#define xb_wid_id xb_widget.w_id
#define xb_wid_stat xb_widget.w_status
#define xb_wid_err_upper xb_widget.w_err_upper_addr
#define xb_wid_err_lower xb_widget.w_err_lower_addr
#define xb_wid_control xb_widget.w_control
#define xb_wid_req_timeout xb_widget.w_req_timeout
#define xb_wid_int_upper xb_widget.w_intdest_upper_addr
#define xb_wid_int_lower xb_widget.w_intdest_lower_addr
#define xb_wid_err_cmdword xb_widget.w_err_cmd_word
#define xb_wid_llp xb_widget.w_llp_cfg
#define xb_wid_stat_clr xb_widget.w_tflush
/*
/*
* we access these through synergy unswizzled space, so the address
* gets twiddled (i.e. references to 0x4 actually go to 0x0 and vv.)
* That's why we put the register first and filler second.
*/
/* xbow-specific widget configuration 0x000058-0x0000FF */
uint32_t xb_wid_arb_reload; /* 0x00005C */
uint32_t _pad_000058;
uint32_t xb_perf_ctr_a; /* 0x000064 */
uint32_t _pad_000060;
uint32_t xb_perf_ctr_b; /* 0x00006c */
uint32_t _pad_000068;
uint32_t xb_nic; /* 0x000074 */
uint32_t _pad_000070;
/* xbow-specific widget configuration 0x000058-0x0000FF */
u32 xb_wid_arb_reload; /* 0x00005C */
u32 _pad_000058;
u32 xb_perf_ctr_a; /* 0x000064 */
u32 _pad_000060;
u32 xb_perf_ctr_b; /* 0x00006c */
u32 _pad_000068;
u32 xb_nic; /* 0x000074 */
u32 _pad_000070;
/* Xbridge only */
uint32_t xb_w0_rst_fnc; /* 0x00007C */
uint32_t _pad_000078;
uint32_t xb_l8_rst_fnc; /* 0x000084 */
uint32_t _pad_000080;
uint32_t xb_l9_rst_fnc; /* 0x00008c */
uint32_t _pad_000088;
uint32_t xb_la_rst_fnc; /* 0x000094 */
uint32_t _pad_000090;
uint32_t xb_lb_rst_fnc; /* 0x00009c */
uint32_t _pad_000098;
uint32_t xb_lc_rst_fnc; /* 0x0000a4 */
uint32_t _pad_0000a0;
uint32_t xb_ld_rst_fnc; /* 0x0000ac */
uint32_t _pad_0000a8;
uint32_t xb_le_rst_fnc; /* 0x0000b4 */
uint32_t _pad_0000b0;
uint32_t xb_lf_rst_fnc; /* 0x0000bc */
uint32_t _pad_0000b8;
uint32_t xb_lock; /* 0x0000c4 */
uint32_t _pad_0000c0;
uint32_t xb_lock_clr; /* 0x0000cc */
uint32_t _pad_0000c8;
/* end of Xbridge only */
uint32_t _pad_0000d0[12];
/* Link Specific Registers, port 8..15 0x000100-0x000300 */
xb_linkregs_t xb_link_raw[MAX_XBOW_PORTS];
#define xb_link(p) xb_link_raw[(p) & (MAX_XBOW_PORTS - 1)]
/* Xbridge only */
u32 xb_w0_rst_fnc; /* 0x00007C */
u32 _pad_000078;
u32 xb_l8_rst_fnc; /* 0x000084 */
u32 _pad_000080;
u32 xb_l9_rst_fnc; /* 0x00008c */
u32 _pad_000088;
u32 xb_la_rst_fnc; /* 0x000094 */
u32 _pad_000090;
u32 xb_lb_rst_fnc; /* 0x00009c */
u32 _pad_000098;
u32 xb_lc_rst_fnc; /* 0x0000a4 */
u32 _pad_0000a0;
u32 xb_ld_rst_fnc; /* 0x0000ac */
u32 _pad_0000a8;
u32 xb_le_rst_fnc; /* 0x0000b4 */
u32 _pad_0000b0;
u32 xb_lf_rst_fnc; /* 0x0000bc */
u32 _pad_0000b8;
u32 xb_lock; /* 0x0000c4 */
u32 _pad_0000c0;
u32 xb_lock_clr; /* 0x0000cc */
u32 _pad_0000c8;
/* end of Xbridge only */
u32 _pad_0000d0[12];
/* Link Specific Registers, port 8..15 0x000100-0x000300 */
xb_linkregs_t xb_link_raw[MAX_XBOW_PORTS];
} xbow_t;
#define xb_link(p) xb_link_raw[(p) & (MAX_XBOW_PORTS - 1)]
#define XB_FLAGS_EXISTS 0x1 /* device exists */
#define XB_FLAGS_MASTER 0x2
#define XB_FLAGS_SLAVE 0x0
@@ -160,7 +161,7 @@ typedef volatile struct xbow_s {
/* End of Xbridge only */
/* used only in ide, but defined here within the reserved portion */
/* of the widget0 address space (before 0xf4) */
/* of the widget0 address space (before 0xf4) */
#define XBOW_WID_UNDEF 0xe4
/* xbow link register set base, legal value for x is 0x8..0xf */
@@ -179,29 +180,37 @@ typedef volatile struct xbow_s {
/* link_control(x) */
#define XB_CTRL_LINKALIVE_IE 0x80000000 /* link comes alive */
/* reserved: 0x40000000 */
/* reserved: 0x40000000 */
#define XB_CTRL_PERF_CTR_MODE_MSK 0x30000000 /* perf counter mode */
#define XB_CTRL_IBUF_LEVEL_MSK 0x0e000000 /* input packet buffer level */
#define XB_CTRL_8BIT_MODE 0x01000000 /* force link into 8 bit mode */
#define XB_CTRL_BAD_LLP_PKT 0x00800000 /* force bad LLP packet */
#define XB_CTRL_WIDGET_CR_MSK 0x007c0000 /* LLP widget credit mask */
#define XB_CTRL_WIDGET_CR_SHFT 18 /* LLP widget credit shift */
#define XB_CTRL_ILLEGAL_DST_IE 0x00020000 /* illegal destination */
#define XB_CTRL_OALLOC_IBUF_IE 0x00010000 /* overallocated input buffer */
/* reserved: 0x0000fe00 */
#define XB_CTRL_IBUF_LEVEL_MSK 0x0e000000 /* input packet buffer
level */
#define XB_CTRL_8BIT_MODE 0x01000000 /* force link into 8
bit mode */
#define XB_CTRL_BAD_LLP_PKT 0x00800000 /* force bad LLP
packet */
#define XB_CTRL_WIDGET_CR_MSK 0x007c0000 /* LLP widget credit
mask */
#define XB_CTRL_WIDGET_CR_SHFT 18 /* LLP widget credit
shift */
#define XB_CTRL_ILLEGAL_DST_IE 0x00020000 /* illegal destination
*/
#define XB_CTRL_OALLOC_IBUF_IE 0x00010000 /* overallocated input
buffer */
/* reserved: 0x0000fe00 */
#define XB_CTRL_BNDWDTH_ALLOC_IE 0x00000100 /* bandwidth alloc */
#define XB_CTRL_RCV_CNT_OFLOW_IE 0x00000080 /* rcv retry overflow */
#define XB_CTRL_XMT_CNT_OFLOW_IE 0x00000040 /* xmt retry overflow */
#define XB_CTRL_XMT_MAX_RTRY_IE 0x00000020 /* max transmit retry */
#define XB_CTRL_RCV_IE 0x00000010 /* receive */
#define XB_CTRL_XMT_RTRY_IE 0x00000008 /* transmit retry */
/* reserved: 0x00000004 */
#define XB_CTRL_MAXREQ_TOUT_IE 0x00000002 /* maximum request timeout */
/* reserved: 0x00000004 */
#define XB_CTRL_MAXREQ_TOUT_IE 0x00000002 /* maximum request
timeout */
#define XB_CTRL_SRC_TOUT_IE 0x00000001 /* source timeout */
/* link_status(x) */
#define XB_STAT_LINKALIVE XB_CTRL_LINKALIVE_IE
/* reserved: 0x7ff80000 */
/* reserved: 0x7ff80000 */
#define XB_STAT_MULTI_ERR 0x00040000 /* multi error */
#define XB_STAT_ILLEGAL_DST_ERR XB_CTRL_ILLEGAL_DST_IE
#define XB_STAT_OALLOC_IBUF_ERR XB_CTRL_OALLOC_IBUF_IE
@@ -211,7 +220,7 @@ typedef volatile struct xbow_s {
#define XB_STAT_XMT_MAX_RTRY_ERR XB_CTRL_XMT_MAX_RTRY_IE
#define XB_STAT_RCV_ERR XB_CTRL_RCV_IE
#define XB_STAT_XMT_RTRY_ERR XB_CTRL_XMT_RTRY_IE
/* reserved: 0x00000004 */
/* reserved: 0x00000004 */
#define XB_STAT_MAXREQ_TOUT_ERR XB_CTRL_MAXREQ_TOUT_IE
#define XB_STAT_SRC_TOUT_ERR XB_CTRL_SRC_TOUT_IE
@@ -222,7 +231,7 @@ typedef volatile struct xbow_s {
#define XB_AUX_LINKFAIL_RST_BAD 0x00000040
#define XB_AUX_STAT_PRESENT 0x00000020
#define XB_AUX_STAT_PORT_WIDTH 0x00000010
/* reserved: 0x0000000f */
/* reserved: 0x0000000f */
/*
* link_arb_upper/link_arb_lower(x), (reg) should be the link_arb_upper
@@ -238,7 +247,8 @@ typedef volatile struct xbow_s {
/* XBOW_WID_STAT */
#define XB_WID_STAT_LINK_INTR_SHFT (24)
#define XB_WID_STAT_LINK_INTR_MASK (0xFF << XB_WID_STAT_LINK_INTR_SHFT)
#define XB_WID_STAT_LINK_INTR(x) (0x1 << (((x)&7) + XB_WID_STAT_LINK_INTR_SHFT))
#define XB_WID_STAT_LINK_INTR(x) \
(0x1 << (((x)&7) + XB_WID_STAT_LINK_INTR_SHFT))
#define XB_WID_STAT_WIDGET0_INTR 0x00800000
#define XB_WID_STAT_SRCID_MASK 0x000003c0 /* Xbridge only */
#define XB_WID_STAT_REG_ACC_ERR 0x00000020
@@ -264,7 +274,7 @@ typedef volatile struct xbow_s {
#define XXBOW_WIDGET_PART_NUM 0xd000 /* Xbridge */
#define XBOW_WIDGET_MFGR_NUM 0x0
#define XXBOW_WIDGET_MFGR_NUM 0x0
#define PXBOW_WIDGET_PART_NUM 0xd100 /* PIC */
#define PXBOW_WIDGET_PART_NUM 0xd100 /* PIC */
#define XBOW_REV_1_0 0x1 /* xbow rev 1.0 is "1" */
#define XBOW_REV_1_1 0x2 /* xbow rev 1.1 is "2" */
@@ -279,13 +289,13 @@ typedef volatile struct xbow_s {
#define XBOW_WID_ARB_RELOAD_INT 0x3f /* GBR reload interval */
#define IS_XBRIDGE_XBOW(wid) \
(XWIDGET_PART_NUM(wid) == XXBOW_WIDGET_PART_NUM && \
XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
(XWIDGET_PART_NUM(wid) == XXBOW_WIDGET_PART_NUM && \
XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
#define IS_PIC_XBOW(wid) \
(XWIDGET_PART_NUM(wid) == PXBOW_WIDGET_PART_NUM && \
XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
(XWIDGET_PART_NUM(wid) == PXBOW_WIDGET_PART_NUM && \
XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
#define XBOW_WAR_ENABLED(pv, widid) ((1 << XWIDGET_REV_NUM(widid)) & pv)
#endif /* _ASM_IA64_SN_XTALK_XBOW_H */
#endif /* _ASM_IA64_SN_XTALK_XBOW_H */

View File

@@ -25,28 +25,28 @@
/* widget configuration registers */
struct widget_cfg{
uint32_t w_id; /* 0x04 */
uint32_t w_pad_0; /* 0x00 */
uint32_t w_status; /* 0x0c */
uint32_t w_pad_1; /* 0x08 */
uint32_t w_err_upper_addr; /* 0x14 */
uint32_t w_pad_2; /* 0x10 */
uint32_t w_err_lower_addr; /* 0x1c */
uint32_t w_pad_3; /* 0x18 */
uint32_t w_control; /* 0x24 */
uint32_t w_pad_4; /* 0x20 */
uint32_t w_req_timeout; /* 0x2c */
uint32_t w_pad_5; /* 0x28 */
uint32_t w_intdest_upper_addr; /* 0x34 */
uint32_t w_pad_6; /* 0x30 */
uint32_t w_intdest_lower_addr; /* 0x3c */
uint32_t w_pad_7; /* 0x38 */
uint32_t w_err_cmd_word; /* 0x44 */
uint32_t w_pad_8; /* 0x40 */
uint32_t w_llp_cfg; /* 0x4c */
uint32_t w_pad_9; /* 0x48 */
uint32_t w_tflush; /* 0x54 */
uint32_t w_pad_10; /* 0x50 */
u32 w_id; /* 0x04 */
u32 w_pad_0; /* 0x00 */
u32 w_status; /* 0x0c */
u32 w_pad_1; /* 0x08 */
u32 w_err_upper_addr; /* 0x14 */
u32 w_pad_2; /* 0x10 */
u32 w_err_lower_addr; /* 0x1c */
u32 w_pad_3; /* 0x18 */
u32 w_control; /* 0x24 */
u32 w_pad_4; /* 0x20 */
u32 w_req_timeout; /* 0x2c */
u32 w_pad_5; /* 0x28 */
u32 w_intdest_upper_addr; /* 0x34 */
u32 w_pad_6; /* 0x30 */
u32 w_intdest_lower_addr; /* 0x3c */
u32 w_pad_7; /* 0x38 */
u32 w_err_cmd_word; /* 0x44 */
u32 w_pad_8; /* 0x40 */
u32 w_llp_cfg; /* 0x4c */
u32 w_pad_9; /* 0x48 */
u32 w_tflush; /* 0x54 */
u32 w_pad_10; /* 0x50 */
};
/*
@@ -63,7 +63,7 @@ struct xwidget_info{
struct xwidget_hwid xwi_hwid; /* Widget Identification */
char xwi_masterxid; /* Hub's Widget Port Number */
void *xwi_hubinfo; /* Hub's provider private info */
uint64_t *xwi_hub_provider; /* prom provider functions */
u64 *xwi_hub_provider; /* prom provider functions */
void *xwi_vertex;
};

View File

@@ -132,8 +132,8 @@ static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
* Retrieve the pci device information given the bus and device|function number.
*/
static inline u64
sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
u64 sn_irq_info)
sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
u64 sn_irq_info)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
@@ -141,7 +141,7 @@ sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
SAL_CALL_NOLOCK(ret_stuff,
(u64) SN_SAL_IOIF_GET_PCIDEV_INFO,
(u64) segment, (u64) bus_number, (u64) devfn,
(u64) segment, (u64) bus_number, (u64) devfn,
(u64) pci_dev,
sn_irq_info, 0, 0);
return ret_stuff.v0;
@@ -268,7 +268,7 @@ static void sn_fixup_ionodes(void)
*/
static void
sn_pci_window_fixup(struct pci_dev *dev, unsigned int count,
int64_t * pci_addrs)
s64 * pci_addrs)
{
struct pci_controller *controller = PCI_CONTROLLER(dev->bus);
unsigned int i;
@@ -328,7 +328,7 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
struct pci_bus *host_pci_bus;
struct pci_dev *host_pci_dev;
struct pcidev_info *pcidev_info;
int64_t pci_addrs[PCI_ROM_RESOURCE + 1];
s64 pci_addrs[PCI_ROM_RESOURCE + 1];
struct sn_irq_info *sn_irq_info;
unsigned long size;
unsigned int bus_no, devfn;

View File

@@ -28,7 +28,7 @@ extern int sn_ioif_inited;
static struct list_head **sn_irq_lh;
static spinlock_t sn_irq_info_lock = SPIN_LOCK_UNLOCKED; /* non-IRQ lock */
static inline uint64_t sn_intr_alloc(nasid_t local_nasid, int local_widget,
static inline u64 sn_intr_alloc(nasid_t local_nasid, int local_widget,
u64 sn_irq_info,
int req_irq, nasid_t req_nasid,
int req_slice)
@@ -123,7 +123,7 @@ static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
list_for_each_entry_safe(sn_irq_info, sn_irq_info_safe,
sn_irq_lh[irq], list) {
uint64_t bridge;
u64 bridge;
int local_widget, status;
nasid_t local_nasid;
struct sn_irq_info *new_irq_info;
@@ -134,7 +134,7 @@ static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
break;
memcpy(new_irq_info, sn_irq_info, sizeof(struct sn_irq_info));
bridge = (uint64_t) new_irq_info->irq_bridge;
bridge = (u64) new_irq_info->irq_bridge;
if (!bridge) {
kfree(new_irq_info);
break; /* irq is not a device interrupt */
@@ -349,10 +349,10 @@ static void force_interrupt(int irq)
*/
static void sn_check_intr(int irq, struct sn_irq_info *sn_irq_info)
{
uint64_t regval;
u64 regval;
int irr_reg_num;
int irr_bit;
uint64_t irr_reg;
u64 irr_reg;
struct pcidev_info *pcidev_info;
struct pcibus_info *pcibus_info;

Some files were not shown because too many files have changed in this diff.