Commit Graph

3139 Commits

Author SHA1 Message Date
Alan Cox
11e29e2151 libata: handle early device PIO modes correctly 2005-10-21 18:46:32 -04:00
Trond Myklebust
654b1536b0 Merge /home/trondmy/scm/kernel/git/torvalds/linux-2.6 2005-10-20 14:25:44 -07:00
Hugh Dickins
ac9b9c667c [PATCH] Fix handling spurious page fault for hugetlb region
This reverts commit 3359b54c8c and
replaces it with a cleaner version that is purely based on page table
operations, so that the synchronization between inode size and hugetlb
mappings becomes moot.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-20 09:02:07 -07:00
Jeff Garzik
323cb3ce6e Merge branch 'master' 2005-10-20 10:11:25 -04:00
Jeff Garzik
902f90735b Merge branch 'master' 2005-10-20 10:06:09 -04:00
Yasunori Goto
281dd25cdc [PATCH] swiotlb: make sure initial DMA allocations really are in DMA memory
This introduces a limit parameter to the core bootmem allocator; the new
parameter indicates that physical memory allocated by the bootmem
allocator should be within the requested limit.

We also introduce the alloc_bootmem_low_pages_limit, alloc_bootmem_node_limit,
and alloc_bootmem_low_pages_node_limit APIs, but alloc_bootmem_low_pages_limit
is the only one used for swiotlb.

The existing alloc_bootmem_low_pages() API could instead have been
changed to pass the right limit to the core allocator.  But that
would make the patch more intrusive for 2.6.14, as other arches use
alloc_bootmem_low_pages().  We may do that post-2.6.14 as a
cleanup.

With this, swiotlb gets memory within 4G for both x86_64 and ia64
arches.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-19 23:11:33 -07:00
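
A minimal sketch of the interface change described above, assuming a
(size, limit) signature for alloc_bootmem_low_pages_limit (the real
prototype is not shown in this log); it only illustrates how swiotlb
could keep its bounce pool below 4G.

#include <linux/types.h>

/* Assumed prototype, modeled on the alloc_bootmem_low_pages() naming. */
void *alloc_bootmem_low_pages_limit(unsigned long size,
                                    unsigned long long limit);

#define SWIOTLB_DMA_LIMIT 0x100000000ULL        /* keep the pool under 4G */

static void *swiotlb_alloc_pool(unsigned long bytes)
{
        /* physical memory returned here must sit below the 4G limit */
        return alloc_bootmem_low_pages_limit(bytes, SWIOTLB_DMA_LIMIT);
}
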
Seth, Rohit
3359b54c8c [PATCH] Handle spurious page fault for hugetlb region
The hugetlb pages are currently pre-faulted.  At the time of mmap of
hugepages, we populate the new PTEs.  It is possible that HW has already
cached some of the unused PTEs internally.  These stale entries never
get a chance to be purged in existing control flow.

This patch extends the check in the page fault code for hugepages.  Check if
a faulted address falls within the size of the hugetlb file backing it.
We return VM_FAULT_MINOR for these cases (assuming that the arch
specific page-faulting code purges the stale entry for the archs that
need it).

Signed-off-by: Rohit Seth <rohit.seth@intel.com>

[ This is apparently arguably an ia64 port bug. But the code won't
  hurt, and for now it fixes a real problem on some ia64 machines ]

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-19 13:56:27 -07:00
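
A minimal sketch of the check described above, assuming the fault path
can reach the backing file through the VMA; the helper name and exact
placement are illustrative, not the actual patch.

#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/hugetlb.h>

/* Sketch: a fault inside the file-backed part of a hugetlb mapping is
 * treated as spurious (the arch code purges the stale entry); a fault
 * past the end of the file is a real error.
 */
static int hugetlb_spurious_fault_check(struct vm_area_struct *vma,
                                        unsigned long address)
{
        struct inode *inode = vma->vm_file->f_dentry->d_inode;
        unsigned long idx;

        idx = ((address - vma->vm_start) >> HPAGE_SHIFT)
                + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));

        if (idx >= (i_size_read(inode) >> HPAGE_SHIFT))
                return VM_FAULT_SIGBUS;         /* beyond EOF: real fault */

        return VM_FAULT_MINOR;                  /* stale entry: just retry */
}
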
J. Bruce Fields
a0857d03b2 RPCSEC_GSS: krb5 cleanup
Remove some senseless wrappers.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:47 -07:00
J. Bruce Fields
00fd6e1425 RPCSEC_GSS: remove all qop parameters
Not only are the qop parameters that are passed around throughout the gssapi
 unused by any currently implemented mechanism, but there appears to be some
 doubt as to whether they will ever be used.  Let's just kill them off for now.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:47 -07:00
J. Bruce Fields
14ae162c24 RPCSEC_GSS: Add support for privacy to krb5 rpcsec_gss mechanism.
Add support for privacy to the krb5 rpcsec_gss mechanism.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:46 -07:00
J. Bruce Fields
bfa91516b5 RPCSEC_GSS: krb5 pre-privacy cleanup
The code this was originally derived from processed wrap and mic tokens using
 the same functions.  This required some contortions, and more would be required
 with the addition of xdr_bufs, so it's better to separate out the two code
 paths.

 In preparation for adding privacy support, remove the last vestiges of the
 old wrap token code.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:45 -07:00
J. Bruce Fields
24b2605bec RPCSEC_GSS: cleanup au_rslack calculation
Various xdr encode routines use au_rslack to guess where the reply argument
 will end up, so we can set up the xdr_buf to receive data into the right place
 for zero copy.

 Currently we calculate the au_rslack estimate when we check the verifier.
 Normally this only depends on the verifier size.  In the integrity case we add
 a few bytes to allow for a length and sequence number.

 It's a bit simpler to calculate only the verifier size when we check the
 verifier, and delay the full calculation till we unwrap.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:44 -07:00
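
A small sketch of the arithmetic described above; the structure and
field names are illustrative, and the two-word integrity adjustment is
taken from the "length and sequence number" remark rather than from the
SUNRPC source.

/* Sizes are in 32-bit XDR words. */
struct rslack_sketch {
        unsigned int verf_words;   /* recorded when the verifier is checked */
        unsigned int rslack;       /* filled in later, at unwrap time */
};

static void update_rslack(struct rslack_sketch *s, int integrity_case)
{
        unsigned int slack = s->verf_words;

        if (integrity_case)
                slack += 2;        /* room for a length and a sequence number */

        s->rslack = slack;
}
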
J. Bruce Fields
ead5e1c26f SUNRPC: Provide a callback to allow freeing pages allocated during xdr encoding
For privacy, we need to allocate pages to store the encrypted data (the
 passed-in pages can't be used without the risk of corrupting data in the page cache).
 So we need a way to free that memory after the request has been transmitted.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:43 -07:00
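
A sketch of the shape such a hook could take; the field and function
names here are invented for illustration, since the commit text does not
show the callback's actual signature.

#include <linux/mm.h>
#include <linux/slab.h>

struct enc_send_buf {
        struct page **enc_pages;                 /* hold the encrypted data */
        unsigned int npages;
        void (*release)(struct enc_send_buf *);  /* run after transmission */
};

static void free_enc_pages(struct enc_send_buf *buf)
{
        unsigned int i;

        for (i = 0; i < buf->npages; i++)
                __free_page(buf->enc_pages[i]);
        kfree(buf->enc_pages);
        buf->npages = 0;
}
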
J. Bruce Fields
293f1eb551 SUNRPC: Add support for privacy to generic gss-api code.
Add support for privacy to generic gss-api code.  This is dead code until we
 have both a mechanism that supports privacy and code in the client or server
 that uses it.

 Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 23:19:42 -07:00
Jeff Garzik
972c26bdd6 libata: add ata_sg_is_last() helper, use it in several drivers 2005-10-18 22:14:54 -04:00
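
The helper name suggests a simple end-of-table test; a minimal sketch of
that idea with a simplified stand-in structure (the real helper works on
struct ata_queued_cmd and struct scatterlist).

struct sg_table_sketch {
        unsigned int n_elem;       /* number of scatter/gather entries */
};

static inline int sg_entry_is_last(const struct sg_table_sketch *t,
                                   unsigned int idx)
{
        /* the last entry is the one whose index equals n_elem - 1 */
        return idx == t->n_elem - 1;
}
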
Jeff Garzik
b194b4250c Merge branch 'upstream' 2005-10-18 21:52:42 -04:00
Trond Myklebust
02a913a73b NFSv4: Eliminate nfsv4 open race...
Make NFSv4 return the fully initialized file pointer with the
 stateid that it created in the lookup w/intent.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:17 -07:00
Trond Myklebust
834f2a4a15 VFS: Allow the filesystem to return a full file pointer on open intent
This is needed by NFSv4 for atomicity reasons: our open command is in
 fact a lookup+open, so we need to be able to propagate open context
 information from lookup() into the resulting struct file's
 private_data field.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:16 -07:00
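
A sketch of the pattern described above, assuming the nameidata carries
the file object opened during the lookup; the helper name and the
intent.open.file field are assumptions for illustration.

#include <linux/fs.h>
#include <linux/namei.h>
#include <linux/err.h>

struct nfs4_open_ctx;                              /* opaque stateid context */

static struct file *finish_intent_open(struct nameidata *nd,
                                       struct nfs4_open_ctx *ctx)
{
        struct file *filp = nd->intent.open.file;  /* assumed field */

        if (!IS_ERR(filp))
                filp->private_data = ctx;          /* propagate the context */
        return filp;
}
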
Trond Myklebust
06735b3454 NFSv4: Fix up handling of open_to_lock sequence ids
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:15 -07:00
Trond Myklebust
faf5f49c2d NFSv4: Make NFS clean up byte range locks asynchronously
Currently we fail to do so if the process was signalled.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:15 -07:00
Trond Myklebust
9512135df1 NFSv4: Fix a potential CLOSE race
Once the state_owner and lock_owner semaphores get removed, it will be
 possible for other OPEN requests to reopen the same file if they have
 lower sequence ids than our CLOSE call.
 This patch ensures that we recheck the file state once
 nfs_wait_on_sequence() has completed waiting.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:12 -07:00
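
A sketch of the recheck described above; the function names are
placeholders for the real NFSv4 state machinery.

struct nfs4_state_sketch;                          /* opaque for this sketch */

int wait_for_sequence_turn(struct nfs4_state_sketch *state);
int state_still_needs_close(struct nfs4_state_sketch *state);
int send_close_rpc(struct nfs4_state_sketch *state);

static int nfs4_do_close_sketch(struct nfs4_state_sketch *state)
{
        int err = wait_for_sequence_turn(state);   /* may sleep */

        if (err)
                return err;

        /* a lower-sequence OPEN may have reopened the file while we slept */
        if (!state_still_needs_close(state))
                return 0;

        return send_close_rpc(state);
}
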
Trond Myklebust
cee54fc944 NFSv4: Add functions to order RPC calls
NFSv4 file state-changing functions such as OPEN, CLOSE, LOCK,... are all
 labelled with "sequence identifiers" in order to prevent the server from
 reordering RPC requests, as this could cause its file state to
 become out of sync with the client.

 Currently the NFS client code enforces this ordering locally using
 semaphores to restrict access to structures until the RPC call is done.
 This, of course, only works with synchronous RPC calls, since the
 user process must first grab the semaphore.
 By dropping semaphores, and instead teaching the RPC engine to hold
 the RPC calls until they are ready to be sent, we can extend this
 process to work nicely with asynchronous RPC calls too.

 This patch adds a new list called "rpc_sequence" that defines the order
 of the RPC calls to be sent. We add one such list for each state_owner.
 When an RPC call is ready to be sent, it checks whether it is at the top of the
 rpc_sequence list. If so, it proceeds; if not, it goes back to sleep
 and loops until it reaches the top of the list.
 Once the RPC call has completed, it can then bump the sequence id counter,
 and remove itself from the rpc_sequence list, and then wake up the next
 sleeper.

 Note that the state_owner sequence ids and lock_owner sequence ids are
 all indexed to the same rpc_sequence list, so OPEN, LOCK,... requests
 are all ordered w.r.t. each other.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:12 -07:00
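
A compact sketch of the queueing discipline described above (locking
omitted); the names are illustrative and generic list/wait primitives
stand in for the RPC engine.

#include <linux/list.h>
#include <linux/wait.h>

struct seq_queue {
        struct list_head rpc_sequence;   /* RPC calls in required send order */
        wait_queue_head_t wait;
        unsigned int counter;            /* the sequence id counter */
};

struct seq_waiter {
        struct list_head list;
};

static void seq_wait_for_turn(struct seq_queue *q, struct seq_waiter *w)
{
        list_add_tail(&w->list, &q->rpc_sequence);
        /* sleep until this call reaches the head of the list */
        wait_event(q->wait, q->rpc_sequence.next == &w->list);
}

static void seq_call_done(struct seq_queue *q, struct seq_waiter *w)
{
        q->counter++;             /* bump the sequence id */
        list_del(&w->list);       /* drop out of the rpc_sequence list */
        wake_up(&q->wait);        /* let the next sleeper re-check its turn */
}
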
Trond Myklebust
5e5ce5be6f RPC: allow call_encode() to delay transmission of an RPC call.
Currently, call_encode will cause the entire RPC call to abort if it returns
 an error. This is unnecessarily rigid, and gets in the way of attempts
 to allow the NFSv4 layer to order RPC calls that carry sequence ids.

 Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2005-10-18 14:20:11 -07:00
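
A sketch of the intended control flow, assuming a "not ready yet" status
such as -EAGAIN (the value the RPC engine actually uses is not shown
here): an encode step that cannot proceed requeues the call instead of
aborting it.

#include <errno.h>

static int handle_encode_status(int status)
{
        if (status == 0)
                return 1;          /* encoded: transmit now */
        if (status == -EAGAIN)
                return 0;          /* not our turn yet: sleep and retry later */
        return status;             /* any other error still aborts the call */
}
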
Albert Lee
c6a33e2464 [PATCH] libata CHS: LBA28/LBA48 optimization (revise #6)
- add lba_28_ok() and lba_48_ok() to ata.h
     - check the ending block number instead of the starting block number
       (see the sketch after this entry)
     - use lba_28_ok() for the CHS range check
     - LBA28/LBA48 optimization

Suggested by Mark Lord and Alan Cox.

Signed-off-by: Albert Lee <albertcc@tw.ibm.com>

Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
2005-10-18 17:18:16 -04:00
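
A plain-C sketch of the two range helpers named in the list above,
checking the ending block number as described; the thresholds follow the
standard LBA28/LBA48 limits and the code is modeled on the description,
not copied from ata.h.

#include <stdint.h>

static inline int lba_28_ok(uint64_t block, uint32_t n_block)
{
        /* last sector must fit in 28 bits; at most 256 sectors per command */
        return (block + n_block) <= ((uint64_t)1 << 28) && n_block <= 256;
}

static inline int lba_48_ok(uint64_t block, uint32_t n_block)
{
        /* last sector must fit in 48 bits; at most 65536 sectors per command */
        return (block + n_block) <= ((uint64_t)1 << 48) && n_block <= 65536;
}
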
Albert Lee
8cbd6df1f0 [PATCH] libata CHS: calculate read/write commands and protocol on the fly (revise #6)
- merge ata_prot_to_cmd() and ata_dev_set_protocol() as
       ata_rwcmd_protocol()
     - pave the road for read/write multiple support
     - remove usage of pre-cached command and protocol values and call
       ata_rwcmd_protocol() instead (see the sketch after this entry)

Signed-off-by: Albert Lee <albertcc@tw.ibm.com>

Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
2005-10-18 17:16:13 -04:00
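
A sketch of picking the read/write command on the fly from the addressing
mode and transfer direction, as the list above describes; the opcodes are
standard ATA command codes, but the selection table itself is
illustrative, not libata's ata_rwcmd_protocol().

#include <stdint.h>

enum addr_mode { ADDR_CHS, ADDR_LBA28, ADDR_LBA48 };

static uint8_t rw_command_sketch(enum addr_mode mode, int is_write, int use_dma)
{
        if (mode == ADDR_LBA48)
                return is_write ? (use_dma ? 0x35 : 0x34)   /* WRITE DMA EXT / WRITE SECTORS EXT */
                                : (use_dma ? 0x25 : 0x24);  /* READ DMA EXT / READ SECTORS EXT */

        /* CHS and LBA28 share the classic 28-bit opcodes */
        return is_write ? (use_dma ? 0xCA : 0x30)           /* WRITE DMA / WRITE SECTORS */
                        : (use_dma ? 0xC8 : 0x20);          /* READ DMA / READ SECTORS */
}
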