Merge branch 'for-linus' of git://git.kernel.dk/data/git/linux-2.6-block

* 'for-linus' of git://git.kernel.dk/data/git/linux-2.6-block: (63 commits)
  Fix memory leak in dm-crypt
  SPARC64: sg chaining support
  SPARC: sg chaining support
  PPC: sg chaining support
  PS3: sg chaining support
  IA64: sg chaining support
  x86-64: enable sg chaining
  x86-64: update pci-gart iommu to sg helpers
  x86-64: update nommu to sg helpers
  x86-64: update calgary iommu to sg helpers
  swiotlb: sg chaining support
  i386: enable sg chaining
  i386 dma_map_sg: convert to using sg helpers
  mmc: need to zero sglist on init
  Panic in blk_rq_map_sg() from CCISS driver
  remove sglist_len
  remove blk_queue_max_phys_segments in libata
  revert sg segment size ifdefs
  Fixup u14-34f ENABLE_SG_CHAINING
  qla1280: enable use_sg_chaining option
  ...
Linus Torvalds
2007-10-16 10:09:16 -07:00
142 changed files with 1186 additions and 1115 deletions
+2 -2
@@ -514,7 +514,7 @@ With scatterlists, you map a region gathered from several regions by:
int i, count = pci_map_sg(dev, sglist, nents, direction);
struct scatterlist *sg;
-for (i = 0, sg = sglist; i < count; i++, sg++) {
+for_each_sg(sglist, sg, count, i) {
hw_address[i] = sg_dma_address(sg);
hw_len[i] = sg_dma_len(sg);
}
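For context, the converted example reads as follows when assembled into a complete helper. This is a minimal sketch, not part of this commit; the function name and the hw_address[]/hw_len[] arrays are placeholders standing in for real device registers, as in the documentation above.

---> snip example <---
#include <linux/pci.h>
#include <linux/scatterlist.h>

/* Hypothetical driver helper: map a scatterlist and program each
 * resulting DMA segment into the device. */
static int example_map_and_program(struct pci_dev *dev,
				   struct scatterlist *sglist, int nents,
				   dma_addr_t *hw_address, unsigned int *hw_len)
{
	struct scatterlist *sg;
	int i, count;

	count = pci_map_sg(dev, sglist, nents, PCI_DMA_TODEVICE);
	if (count == 0)
		return -ENOMEM;

	/* for_each_sg() advances with sg_next(), so it is safe on
	 * chained scatterlists where plain sg++ is not. */
	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

	/* ... start the transfer and wait for completion ... */

	pci_unmap_sg(dev, sglist, nents, PCI_DMA_TODEVICE);
	return 0;
}
---> snip example <---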
@@ -782,5 +782,5 @@ following people:
Jay Estabrook <Jay.Estabrook@compaq.com>
Thomas Sailer <sailer@ife.ee.ethz.ch>
Andrea Arcangeli <andrea@suse.de>
-Jens Axboe <axboe@suse.de>
+Jens Axboe <jens.axboe@oracle.com>
David Mosberger-Tang <davidm@hpl.hp.com>
+1 -1
@@ -330,7 +330,7 @@ Here is a list of some of the different kernel trees available:
- ACPI development tree, Len Brown <len.brown@intel.com>
git.kernel.org:/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6.git
-	- Block development tree, Jens Axboe <axboe@suse.de>
+	- Block development tree, Jens Axboe <jens.axboe@oracle.com>
git.kernel.org:/pub/scm/linux/kernel/git/axboe/linux-2.6-block.git
- DRM development tree, Dave Airlie <airlied@linux.ie>
+20
@@ -0,0 +1,20 @@
+00-INDEX
+	- This file
+as-iosched.txt
+	- Anticipatory IO scheduler
+barrier.txt
+	- I/O Barriers
+biodoc.txt
+	- Notes on the Generic Block Layer Rewrite in Linux 2.5
+capability.txt
+	- Generic Block Device Capability (/sys/block/<disk>/capability)
+deadline-iosched.txt
+	- Deadline IO scheduler tunables
+ioprio.txt
+	- Block io priorities (in CFQ scheduler)
+request.txt
+	- The members of struct request (in include/linux/blkdev.h)
+stat.txt
+	- Block layer statistics in /sys/block/<dev>/stat
+switching-sched.txt
+	- Switching I/O schedulers at runtime
+13 -8
@@ -20,15 +20,10 @@ actually has a head for each physical device in the logical RAID device.
However, setting the antic_expire (see tunable parameters below) produces
very similar behavior to the deadline IO scheduler.
Selecting IO schedulers
-----------------------
-To choose IO schedulers at boot time, use the argument 'elevator=deadline'.
-'noop', 'as' and 'cfq' (the default) are also available. IO schedulers are
-assigned globally at boot time only presently. It's also possible to change
-the IO scheduler for a determined device on the fly, as described in
-Documentation/block/switching-sched.txt.
+Refer to Documentation/block/switching-sched.txt for information on
+selecting an io scheduler on a per-device basis.
Anticipatory IO scheduler Policies
----------------------------------
@@ -115,7 +110,7 @@ statistics (average think time, average seek distance) on the process
that submitted the just completed request are examined. If it seems
likely that that process will submit another request soon, and that
request is likely to be near the just completed request, then the IO
-scheduler will stop dispatching more read requests for up time (antic_expire)
+scheduler will stop dispatching more read requests for up to (antic_expire)
milliseconds, hoping that process will submit a new request near the one
that just completed. If such a request is made, then it is dispatched
immediately. If the antic_expire wait time expires, then the IO scheduler
@@ -165,3 +160,13 @@ The parameters are:
for big seek time devices though not a linear correspondence - most
processes have only a few ms thinktime.
+In addition to the tunables above there is a read-only file named est_time
+which, when read, will show:
+    - The probability of a task exiting without a cooperating task
+      submitting an anticipated IO.
+    - The current mean think time.
+    - The seek distance used to determine if an incoming IO is better.
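Schematically, the anticipation decision described above compares per-process statistics against the tunables. A rough sketch with hypothetical names only; the real logic lives in block/as-iosched.c and is considerably more involved:

---> snip example <---
/* Hypothetical per-process statistics, of the kind exposed via est_time. */
struct proc_stats {
	unsigned long mean_thinktime;		/* avg ms until the next request */
	unsigned long mean_seek_distance;	/* avg distance of the next request */
};

/* Worth idling for up to antic_expire ms if the submitting process
 * usually comes back quickly, with a request near the one just done. */
static int worth_anticipating(const struct proc_stats *p,
			      unsigned long antic_expire,
			      unsigned long close_enough)
{
	return p->mean_thinktime <= antic_expire &&
	       p->mean_seek_distance <= close_enough;
}
---> snip example <---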
+2 -2
@@ -2,7 +2,7 @@
=====================================================
Notes Written on Jan 15, 2002:
-Jens Axboe <axboe@suse.de>
+Jens Axboe <jens.axboe@oracle.com>
Suparna Bhattacharya <suparna@in.ibm.com>
Last Updated May 2, 2002
@@ -21,7 +21,7 @@ Credits:
---------
2.5 bio rewrite:
-Jens Axboe <axboe@suse.de>
+Jens Axboe <jens.axboe@oracle.com>
Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
+8 -17
@@ -5,16 +5,10 @@ This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.
-Each io queue has a set of io scheduler tunables associated with it. These
-tunables control how the io scheduler works. You can find these entries
-in:
-/sys/block/<device>/queue/iosched
-assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
-you can do so by typing:
-# mount none /sys -t sysfs
+Selecting IO schedulers
+-----------------------
+Refer to Documentation/block/switching-sched.txt for information on
+selecting an io scheduler on a per-device basis.
********************************************************************************
@@ -41,14 +35,11 @@ fifo_batch
When a read request expires its deadline, we must move some requests from
the sorted io scheduler list to the block device dispatch queue. fifo_batch
-controls how many requests we move, based on the cost of each request. A
-request is either qualified as a seek or a stream. The io scheduler knows
-the last request that was serviced by the drive (or will be serviced right
-before this one). See seek_cost and stream_unit.
+controls how many requests we move.
-write_starved (number of dispatches)
--------------
+writes_starved (number of dispatches)
+--------------
When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
@@ -73,6 +64,6 @@ that comes at basically 0 cost we leave that on. We simply disable the
rbtree front sector lookup when the io scheduler merge function is called.
-Nov 11 2002, Jens Axboe <axboe@suse.de>
+Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>
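The read preference governed by writes_starved amounts to a simple counter. A sketch with hypothetical names, not the actual block/deadline-iosched.c code:

---> snip example <---
/* Reads are preferred, but after writes_starved consecutive read
 * batches a write batch must be dispatched so writes never starve. */
static int dispatch_reads_next(int reads_pending, int writes_pending,
			       int *starved, int writes_starved)
{
	if (reads_pending && (!writes_pending || *starved < writes_starved)) {
		(*starved)++;
		return 1;	/* dispatch a read batch */
	}
	*starved = 0;
	return 0;		/* dispatch a write batch */
}
---> snip example <---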
+1 -1
@@ -180,4 +180,4 @@ int main(int argc, char *argv[])
---> snip ionice.c tool <---
-March 11 2005, Jens Axboe <axboe@suse.de>
+March 11 2005, Jens Axboe <jens.axboe@oracle.com>
+1 -1
@@ -1,7 +1,7 @@
struct request documentation
-Jens Axboe <axboe@suse.de> 27/05/02
+Jens Axboe <jens.axboe@oracle.com> 27/05/02
1.0
Index
+21
@@ -1,3 +1,18 @@
+To choose IO schedulers at boot time, use the argument 'elevator=deadline'.
+'noop', 'as' and 'cfq' (the default) are also available. IO schedulers are
+assigned globally at boot time only presently.
+Each io queue has a set of io scheduler tunables associated with it. These
+tunables control how the io scheduler works. You can find these entries
+in:
+/sys/block/<device>/queue/iosched
+assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
+you can do so by typing:
+# mount none /sys -t sysfs
As of the Linux 2.6.10 kernel, it is now possible to change the
IO scheduler for a given block device on the fly (thus making it possible,
for instance, to set the CFQ scheduler for the system default, but
@@ -20,3 +35,9 @@ noop anticipatory deadline [cfq]
# echo anticipatory > /sys/block/hda/queue/scheduler
# cat /sys/block/hda/queue/scheduler
noop [anticipatory] deadline cfq
+Each io queue has a set of io scheduler tunables associated with it. These
+tunables control how the io scheduler works. You can find these entries
+in:
+/sys/block/<device>/queue/iosched
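The same sysfs knob can be driven from a program as well as from the shell. A small userspace sketch; the device name is an example:

---> snip example <---
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/hda/queue/scheduler";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	/* equivalent to: echo deadline > /sys/block/hda/queue/scheduler */
	fputs("deadline\n", f);
	return fclose(f) ? 1 : 0;
}
---> snip example <---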
+7 -7
@@ -396,7 +396,7 @@ sba_dump_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
printk(KERN_DEBUG " %d : DMA %08lx/%05x CPU %p\n", nents,
startsg->dma_address, startsg->dma_length,
sba_sg_address(startsg));
-startsg++;
+startsg = sg_next(startsg);
}
}
@@ -409,7 +409,7 @@ sba_check_sg( struct ioc *ioc, struct scatterlist *startsg, int nents)
while (the_nents-- > 0) {
if (sba_sg_address(the_sg) == 0x0UL)
sba_dump_sg(NULL, startsg, nents);
-the_sg++;
+the_sg = sg_next(the_sg);
}
}
@@ -1201,7 +1201,7 @@ sba_fill_pdir(
u32 pide = startsg->dma_address & ~PIDE_FLAG;
dma_offset = (unsigned long) pide & ~iovp_mask;
startsg->dma_address = 0;
-dma_sg++;
+dma_sg = sg_next(dma_sg);
dma_sg->dma_address = pide | ioc->ibase;
pdirp = &(ioc->pdir_base[pide >> iovp_shift]);
n_mappings++;
@@ -1228,7 +1228,7 @@ sba_fill_pdir(
pdirp++;
} while (cnt > 0);
}
-startsg++;
+startsg = sg_next(startsg);
}
/* force pdir update */
wmb();
@@ -1297,7 +1297,7 @@ sba_coalesce_chunks( struct ioc *ioc,
while (--nents > 0) {
unsigned long vaddr; /* tmp */
-startsg++;
+startsg = sg_next(startsg);
/* PARANOID */
startsg->dma_address = startsg->dma_length = 0;
@@ -1407,7 +1407,7 @@ int sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents, int di
#ifdef ALLOW_IOV_BYPASS_SG
ASSERT(to_pci_dev(dev)->dma_mask);
if (likely((ioc->dma_mask & ~to_pci_dev(dev)->dma_mask) == 0)) {
-for (sg = sglist ; filled < nents ; filled++, sg++){
+for_each_sg(sglist, sg, nents, filled) {
sg->dma_length = sg->length;
sg->dma_address = virt_to_phys(sba_sg_address(sg));
}
@@ -1501,7 +1501,7 @@ void sba_unmap_sg (struct device *dev, struct scatterlist *sglist, int nents, in
while (nents && sglist->dma_length) {
sba_unmap_single(dev, sglist->dma_address, sglist->dma_length, dir);
-sglist++;
+sglist = sg_next(sglist);
nents--;
}
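The conversion in this file is the pattern repeated across the whole series: with chained scatterlists, consecutive entries need not be contiguous in memory, so every sg++ becomes sg_next() (or the loop becomes for_each_sg()). A minimal illustration with a hypothetical walker:

---> snip example <---
#include <linux/scatterlist.h>

/* Hypothetical walker: total byte count of a (possibly chained) sglist.
 *
 * Old style, broken on chained lists because the entry after a chain
 * link lives in a different memory block:
 *
 *	for (i = 0, sg = sgl; i < nents; i++, sg++)
 *		total += sg->length;
 */
static unsigned int sgl_total_bytes(struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	unsigned int total = 0;
	int i;

	for_each_sg(sgl, sg, nents, i)	/* advances via sg_next() */
		total += sg->length;
	return total;
}
---> snip example <---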
+1
@@ -360,6 +360,7 @@ static struct scsi_host_template driver_template = {
.max_sectors = 1024,
.cmd_per_lun = SIMSCSI_REQ_QUEUE_LEN,
.use_clustering = DISABLE_CLUSTERING,
+.use_sg_chaining = ENABLE_SG_CHAINING,
};
static int __init
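Drivers advertise that they are safe for chained sglists through the scsi_host_template flag added above. A hedged sketch of a minimal template; the driver name and handler are hypothetical:

---> snip example <---
#include <linux/module.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int example_queuecommand(struct scsi_cmnd *cmd,
				void (*done)(struct scsi_cmnd *))
{
	cmd->result = DID_OK << 16;	/* hypothetical: succeed immediately */
	done(cmd);
	return 0;
}

static struct scsi_host_template example_template = {
	.module			= THIS_MODULE,
	.name			= "example",
	.queuecommand		= example_queuecommand,
	.this_id		= -1,
	.sg_tablesize		= SG_ALL,
	.cmd_per_lun		= 1,
	.use_clustering		= DISABLE_CLUSTERING,
	/* new with this series: this driver walks sglists with
	 * sg_next()/for_each_sg(), so chaining may be enabled */
	.use_sg_chaining	= ENABLE_SG_CHAINING,
};
---> snip example <---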
+6 -5
@@ -218,16 +218,17 @@ EXPORT_SYMBOL(sn_dma_unmap_single);
*
* Unmap a set of streaming mode DMA translations.
*/
-void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
int nhwentries, int direction)
{
int i;
struct pci_dev *pdev = to_pci_dev(dev);
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
+struct scatterlist *sg;
BUG_ON(dev->bus != &pci_bus_type);
-for (i = 0; i < nhwentries; i++, sg++) {
+for_each_sg(sgl, sg, nhwentries, i) {
provider->dma_unmap(pdev, sg->dma_address, direction);
sg->dma_address = (dma_addr_t) NULL;
sg->dma_length = 0;
@@ -244,11 +245,11 @@ EXPORT_SYMBOL(sn_dma_unmap_sg);
*
* Maps each entry of @sg for DMA.
*/
-int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl, int nhwentries,
int direction)
{
unsigned long phys_addr;
-struct scatterlist *saved_sg = sg;
+struct scatterlist *saved_sg = sgl, *sg;
struct pci_dev *pdev = to_pci_dev(dev);
struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
int i;
@@ -258,7 +259,7 @@ int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
/*
* Setup a DMA address for each entry in the scatterlist.
*/
-for (i = 0; i < nhwentries; i++, sg++) {
+for_each_sg(sgl, sg, nhwentries, i) {
phys_addr = SG_ENT_PHYS_ADDRESS(sg);
sg->dma_address = provider->dma_map(pdev,
phys_addr, sg->length,
+3 -2
@@ -154,12 +154,13 @@ static void dma_direct_unmap_single(struct device *dev, dma_addr_t dma_addr,
{
}
-static int dma_direct_map_sg(struct device *dev, struct scatterlist *sg,
+static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction direction)
{
+struct scatterlist *sg;
int i;
-for (i = 0; i < nents; i++, sg++) {
+for_each_sg(sgl, sg, nents, i) {
sg->dma_address = (page_to_phys(sg->page) + sg->offset) |
dma_direct_offset;
sg->dma_length = sg->length;
+6 -5
@@ -87,15 +87,16 @@ static void ibmebus_unmap_single(struct device *dev,
}
static int ibmebus_map_sg(struct device *dev,
-struct scatterlist *sg,
+struct scatterlist *sgl,
int nents, enum dma_data_direction direction)
{
+struct scatterlist *sg;
int i;
-for (i = 0; i < nents; i++) {
-sg[i].dma_address = (dma_addr_t)page_address(sg[i].page)
-+ sg[i].offset;
-sg[i].dma_length = sg[i].length;
+for_each_sg(sgl, sg, nents, i) {
+sg->dma_address = (dma_addr_t)page_address(sg->page)
++ sg->offset;
+sg->dma_length = sg->length;
}
return nents;
+14 -9
@@ -277,7 +277,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
dma_addr_t dma_next = 0, dma_addr;
unsigned long flags;
struct scatterlist *s, *outs, *segstart;
-int outcount, incount;
+int outcount, incount, i;
unsigned long handle;
BUG_ON(direction == DMA_NONE);
@@ -297,7 +297,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
spin_lock_irqsave(&(tbl->it_lock), flags);
-for (s = outs; nelems; nelems--, s++) {
+for_each_sg(sglist, s, nelems, i) {
unsigned long vaddr, npages, entry, slen;
slen = s->length;
@@ -341,7 +341,8 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
if (novmerge || (dma_addr != dma_next)) {
/* Can't merge: create a new segment */
segstart = s;
-outcount++; outs++;
+outcount++;
+outs = sg_next(outs);
DBG(" can't merge, new segment.\n");
} else {
outs->dma_length += s->length;
@@ -374,7 +375,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
* next entry of the sglist if we didn't fill the list completely
*/
if (outcount < incount) {
-outs++;
+outs = sg_next(outs);
outs->dma_address = DMA_ERROR_CODE;
outs->dma_length = 0;
}
@@ -385,7 +386,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
return outcount;
failure:
-for (s = &sglist[0]; s <= outs; s++) {
+for_each_sg(sglist, s, nelems, i) {
if (s->dma_length != 0) {
unsigned long vaddr, npages;
@@ -395,6 +396,8 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
s->dma_address = DMA_ERROR_CODE;
s->dma_length = 0;
}
+if (s == outs)
+break;
}
spin_unlock_irqrestore(&(tbl->it_lock), flags);
return 0;
@@ -404,6 +407,7 @@ int iommu_map_sg(struct iommu_table *tbl, struct scatterlist *sglist,
void iommu_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist,
int nelems, enum dma_data_direction direction)
{
+struct scatterlist *sg;
unsigned long flags;
BUG_ON(direction == DMA_NONE);
@@ -413,15 +417,16 @@ void iommu_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist,
spin_lock_irqsave(&(tbl->it_lock), flags);
+sg = sglist;
while (nelems--) {
unsigned int npages;
-dma_addr_t dma_handle = sglist->dma_address;
+dma_addr_t dma_handle = sg->dma_address;
-if (sglist->dma_length == 0)
+if (sg->dma_length == 0)
break;
-npages = iommu_num_pages(dma_handle,sglist->dma_length);
+npages = iommu_num_pages(dma_handle, sg->dma_length);
__iommu_free(tbl, dma_handle, npages);
-sglist++;
+sg = sg_next(sg);
}
/* Flush/invalidate TLBs if necessary. As for iommu_free(), we
+4 -3
@@ -616,17 +616,18 @@ static void ps3_unmap_single(struct device *_dev, dma_addr_t dma_addr,
}
}
-static int ps3_sb_map_sg(struct device *_dev, struct scatterlist *sg, int nents,
-enum dma_data_direction direction)
+static int ps3_sb_map_sg(struct device *_dev, struct scatterlist *sgl,
+int nents, enum dma_data_direction direction)
{
#if defined(CONFIG_PS3_DYNAMIC_DMA)
BUG_ON("do");
return -EPERM;
#else
struct ps3_system_bus_device *dev = ps3_dev_to_system_bus_dev(_dev);
+struct scatterlist *sg;
int i;
-for (i = 0; i < nents; i++, sg++) {
+for_each_sg(sgl, sg, nents, i) {
int result = ps3_dma_map(dev->d_region,
page_to_phys(sg->page) + sg->offset, sg->length,
&sg->dma_address, 0);
+13 -12
@@ -35,6 +35,7 @@
#include <linux/slab.h>
#include <linux/pci.h> /* struct pci_dev */
#include <linux/proc_fs.h>
+#include <linux/scatterlist.h>
#include <asm/io.h>
#include <asm/vaddrs.h>
@@ -717,19 +718,19 @@ void pci_unmap_page(struct pci_dev *hwdev,
* Device ownership issues as mentioned above for pci_map_single are
* the same here.
*/
-int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents,
int direction)
{
+struct scatterlist *sg;
int n;
BUG_ON(direction == PCI_DMA_NONE);
/* IIep is write-through, not flushing. */
-for (n = 0; n < nents; n++) {
+for_each_sg(sgl, sg, nents, n) {
BUG_ON(page_address(sg->page) == NULL);
sg->dvma_address =
virt_to_phys(page_address(sg->page)) + sg->offset;
sg->dvma_length = sg->length;
-sg++;
}
return nents;
}
@@ -738,19 +739,19 @@ int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
* Again, cpu read rules concerning calls here are the same as for
* pci_unmap_single() above.
*/
-void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents,
int direction)
{
+struct scatterlist *sg;
int n;
BUG_ON(direction == PCI_DMA_NONE);
if (direction != PCI_DMA_TODEVICE) {
-for (n = 0; n < nents; n++) {
+for_each_sg(sgl, sg, nents, n) {
BUG_ON(page_address(sg->page) == NULL);
mmu_inval_dma_area(
(unsigned long) page_address(sg->page),
(sg->length + PAGE_SIZE-1) & PAGE_MASK);
-sg++;
}
}
}
@@ -789,34 +790,34 @@ void pci_dma_sync_single_for_device(struct pci_dev *hwdev, dma_addr_t ba, size_t
* The same as pci_dma_sync_single_* but for a scatter-gather list,
* same rules and usage.
*/
-void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction)
{
+struct scatterlist *sg;
int n;
BUG_ON(direction == PCI_DMA_NONE);
if (direction != PCI_DMA_TODEVICE) {
-for (n = 0; n < nents; n++) {
+for_each_sg(sgl, sg, nents, n) {
BUG_ON(page_address(sg->page) == NULL);
mmu_inval_dma_area(
(unsigned long) page_address(sg->page),
(sg->length + PAGE_SIZE-1) & PAGE_MASK);
-sg++;
}
}
}
-void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction)
{
+struct scatterlist *sg;
int n;
BUG_ON(direction == PCI_DMA_NONE);
if (direction != PCI_DMA_TODEVICE) {
-for (n = 0; n < nents; n++) {
+for_each_sg(sgl, sg, nents, n) {
BUG_ON(page_address(sg->page) == NULL);
mmu_inval_dma_area(
(unsigned long) page_address(sg->page),
(sg->length + PAGE_SIZE-1) & PAGE_MASK);
-sg++;
}
}
}
+7 -5
@@ -11,8 +11,8 @@
#include <linux/mm.h>
#include <linux/highmem.h> /* pte_offset_map => kmap_atomic */
#include <linux/bitops.h>
+#include <linux/scatterlist.h>
-#include <asm/scatterlist.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/sbus.h>
@@ -144,8 +144,9 @@ static void iounit_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus
spin_lock_irqsave(&iounit->lock, flags);
while (sz != 0) {
--sz;
-sg[sz].dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg[sz].page) + sg[sz].offset, sg[sz].length);
-sg[sz].dvma_length = sg[sz].length;
+sg->dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg->page) + sg->offset, sg->length);
+sg->dvma_length = sg->length;
+sg = sg_next(sg);
}
spin_unlock_irqrestore(&iounit->lock, flags);
}
@@ -173,11 +174,12 @@ static void iounit_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_
spin_lock_irqsave(&iounit->lock, flags);
while (sz != 0) {
--sz;
-len = ((sg[sz].dvma_address & ~PAGE_MASK) + sg[sz].length + (PAGE_SIZE-1)) >> PAGE_SHIFT;
-vaddr = (sg[sz].dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT;
+len = ((sg->dvma_address & ~PAGE_MASK) + sg->length + (PAGE_SIZE-1)) >> PAGE_SHIFT;
+vaddr = (sg->dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT;
IOD(("iounit_release %08lx-%08lx\n", (long)vaddr, (long)len+vaddr));
for (len += vaddr; vaddr < len; vaddr++)
clear_bit(vaddr, iounit->bmap);
+sg = sg_next(sg);
}
spin_unlock_irqrestore(&iounit->lock, flags);
}
+5 -5
@@ -12,8 +12,8 @@
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/highmem.h> /* pte_offset_map => kmap_atomic */
+#include <linux/scatterlist.h>
-#include <asm/scatterlist.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/sbus.h>
@@ -240,7 +240,7 @@ static void iommu_get_scsi_sgl_noflush(struct scatterlist *sg, int sz, struct sb
n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
sg->dvma_length = (__u32) sg->length;
-sg++;
+sg = sg_next(sg);
}
}
@@ -254,7 +254,7 @@ static void iommu_get_scsi_sgl_gflush(struct scatterlist *sg, int sz, struct sbu
n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
sg->dvma_length = (__u32) sg->length;
-sg++;
+sg = sg_next(sg);
}
}
@@ -285,7 +285,7 @@ static void iommu_get_scsi_sgl_pflush(struct scatterlist *sg, int sz, struct sbu
sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset;
sg->dvma_length = (__u32) sg->length;
-sg++;
+sg = sg_next(sg);
}
}
@@ -325,7 +325,7 @@ static void iommu_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_b
n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT;
iommu_release_one(sg->dvma_address & PAGE_MASK, n, sbus);
sg->dvma_address = 0x21212121;
-sg++;
+sg = sg_next(sg);
}
}
+6 -4
@@ -17,8 +17,8 @@
#include <linux/highmem.h>
#include <linux/fs.h>
#include <linux/seq_file.h>
+#include <linux/scatterlist.h>
-#include <asm/scatterlist.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
@@ -1228,8 +1228,9 @@ static void sun4c_get_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_bus *
{
while (sz != 0) {
--sz;
-sg[sz].dvma_address = (__u32)sun4c_lockarea(page_address(sg[sz].page) + sg[sz].offset, sg[sz].length);
-sg[sz].dvma_length = sg[sz].length;
+sg->dvma_address = (__u32)sun4c_lockarea(page_address(sg->page) + sg->offset, sg->length);
+sg->dvma_length = sg->length;
+sg = sg_next(sg);
}
}
@@ -1244,7 +1245,8 @@ static void sun4c_release_scsi_sgl(struct scatterlist *sg, int sz, struct sbus_b
{
while (sz != 0) {
--sz;
-sun4c_unlockarea((char *)sg[sz].dvma_address, sg[sz].length);
+sun4c_unlockarea((char *)sg->dvma_address, sg->length);
+sg = sg_next(sg);
}
}

Some files were not shown because too many files have changed in this diff.