Requires --strict option during invocation:
~/linux$ scripts/checkpatch.pl --strict foo.patch
This tests for bad habits of mine like this:
	return 0 ;
Note that it does allow a special case of a bare semicolon
for empty loops:
	while (foo())
		;
Signed-off-by: Eric Nelson <eric.nelson@boundarydevices.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The previous code used optimizations which were developed to work well
even on narrow-word CPUs (by today's standards). But Linux runs only on
32-bit and wider CPUs, and we can take advantage of that.
First: using a 32x32->64 multiply and a trivial 32-bit shift, we can
correctly divide much larger numbers by 10, and thus we can print
groups of 9 digits instead of groups of 5 digits.
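The trick reads roughly like this (a minimal sketch; the constant and
the helper name are illustrative, not necessarily the kernel's exact
code):
	#include <stdint.h>
	/*
	 * floor(x / 10) via one 32x32->64 multiply and a 32-bit shift.
	 * 0x1999999A is ceil(2^32 / 10); the result is exact for x < 2^30,
	 * which comfortably covers a 9-digit group (values below 10^9).
	 */
	static uint32_t div10(uint32_t x)
	{
		return (uint32_t)(((uint64_t)x * 0x1999999AULL) >> 32);
	}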
Next: there are two algorithms to print larger numbers. One is generic:
divide by 1000000000 and repeatedly print groups of (up to) 9 digits.
It's conceptually simple, but requires an (unsigned long long) /
1000000000 division.
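In outline, that generic path looks something like the sketch below
(illustrative only; plain % 10 and / 10 stand in for the reciprocal
multiply shown above, and digits come out least significant first, as
in the kernel's helpers):
	#include <stdint.h>
	/* Emit at least 'width' digits of x (zero-padded), least
	 * significant first, then keep going while x is nonzero. */
	static char *put_group(char *buf, uint32_t x, int width)
	{
		do {
			*buf++ = '0' + (x % 10);
			x /= 10;
		} while (--width > 0 || x);
		return buf;
	}
	/* Generic algorithm: one (unsigned long long) division by 10^9
	 * per group of nine digits. */
	static char *put_dec_generic(char *buf, unsigned long long n)
	{
		while (n >= 1000000000ULL) {
			buf = put_group(buf, (uint32_t)(n % 1000000000ULL), 9);
			n /= 1000000000ULL;
		}
		return put_group(buf, (uint32_t)n, 1);
	}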
The second algorithm splits the 64-bit unsigned long long into 16-bit
chunks, manipulates them cleverly, and generates groups of 4 decimal
digits. It so happens that it does NOT require a long long division.
If long is > 32 bits, division of 64-bit values is relatively easy, and
we use the first algorithm. If long long is > 64 bits (a strange
architecture with a VERY large long long), the second algorithm can't
be used, and we again use the first one.
Otherwise (long is 32 bits and long long is 64 bits) we use the second
one.
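The choice boils down to a compile-time test, roughly as follows (a
sketch following the kernel's BITS_PER_LONG / BITS_PER_LONG_LONG
convention, not the literal patch):
	#if BITS_PER_LONG != 32 || BITS_PER_LONG_LONG != 64
	/* long wider than 32 bits, or an unusual long long: use the
	 * generic algorithm, dividing by 10^9 and printing groups of
	 * up to 9 digits. */
	#else
	/* 32-bit long, 64-bit long long: split into 16-bit chunks and
	 * print groups of 4 digits; no long long division required. */
	#endif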
And third: there is a simple optimization which takes the fast path not
only for zero, as was done before, but for all one-digit numbers.
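Something along these lines (a sketch of the shape of the fast path,
not the exact code):
	static char *put_one_digit(char *buf, unsigned long long num)
	{
		/* Fast path: every one-digit value, not just zero. */
		if (num <= 9) {
			*buf++ = '0' + (unsigned int)num;
			return buf;
		}
		return NULL;	/* caller falls back to the full conversion */
	}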
In all tested cases the new code is faster than the old one, in many
cases by 30%, and in a few cases by more than 50% (for example, on
x86-32, conversion of 12345678). Code growth is ~0 in the 32-bit case
and ~130 bytes in the 64-bit case.
This patch is based upon an original from Michal Nazarewicz.
[akpm@linux-foundation.org: checkpatch fixes]
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Cc: Douglas W Jones <jones@cs.uiowa.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The '%p' output of the kernel's vsprintf() uses spec.field_width to
determine how many digits to output based on 2 * sizeof(void*) so that all
digits of a pointer are shown. I.e., a pointer will be output as
"001A2B3C" instead of "1A2B3C". However, if the '#' flag is used in the
format (%#p), then the code doesn't take into account the width of the
'0x' prefix and will end up outputting "0x1A2B3C" instead of
"0x001A2B3C".
This patch reworks the "pointer()" format hook to include 2 characters for
the '0x' prefix if the '#' flag is included.
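The width rule amounts to something like this (a sketch, not the
kernel's pointer() code; in vsprintf the '#' flag shows up as the
SPECIAL bit in spec.flags):
	#include <stddef.h>
	#include <stdbool.h>
	/* Default %p field width: 2 * sizeof(void *) hex digits, plus
	 * two more characters when the '#' flag asks for a "0x" prefix,
	 * so the prefix doesn't eat into the zero-padded digits. */
	static size_t ptr_default_width(bool special)
	{
		size_t width = 2 * sizeof(void *);
		if (special)
			width += 2;	/* room for "0x" */
		return width;
	}
On a 64-bit machine this gives 16 for %p and 18 for %#p, i.e. "0x"
followed by all 16 digits.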
[akpm@linux-foundation.org: checkpatch fixes]
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In the comment for allocate_resource(), the explanation of the
parameters min and max is not correct.
These two parameters actually specify the range within which the
resource will be allocated, not the minimum/maximum size that will be
allocated.
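The corrected meaning, roughly, in kernel-doc form (wording
illustrative, not the literal comment from the patch):
	/**
	 * @min: minimum acceptable start address for the allocated region
	 * @max: maximum acceptable end address for the allocated region
	 *
	 * i.e. @min and @max bound where the new resource may be placed
	 * within @root, not how small or large the allocation may be.
	 */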
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ULONG_MAX is often used to check for integer overflow when calculating
allocation size. While ULONG_MAX happens to work on most systems, there
is no guarantee that `size_t' is the same size as `long'.
This patch introduces SIZE_MAX, the maximum value of `size_t', to improve
portability and readability for allocation size validation.
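With SIZE_MAX being the all-ones size_t value, a typical overflow check
looks like this (the helper is an illustrative example, not a call site
from the patch):
	#include <linux/kernel.h>	/* SIZE_MAX */
	#include <linux/slab.h>		/* kmalloc */
	/* Overflow-safe array allocation: n * size must not wrap size_t. */
	static void *alloc_array(size_t n, size_t size)
	{
		if (size && n > SIZE_MAX / size)
			return NULL;	/* n * size would overflow */
		return kmalloc(n * size, GFP_KERNEL);
	}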
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Acked-by: Alex Elder <elder@dreamhost.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit d065bd810b ("mm: retry page fault when blocking on disk
transfer") and commit 37b23e0525 ("x86,mm: make pagefault killable")
introduced changes into the x86 page fault handler to make it retryable
as well as killable.
These changes reduce the mmap_sem hold time, which is crucial during OOM
killer invocation.
Port these changes to um.
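The pattern being ported looks roughly like this (heavily simplified
from the x86 handler; the real code does the VMA lookup, access checks
and fault accounting around it):
	#include <linux/mm.h>
	#include <linux/sched.h>
	static void fault_sketch(struct mm_struct *mm,
				 struct vm_area_struct *vma,
				 unsigned long address)
	{
		unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
		int fault;
		down_read(&mm->mmap_sem);
	retry:
		fault = handle_mm_fault(mm, vma, address, flags);
		if (fault & VM_FAULT_RETRY) {
			/* mmap_sem was dropped for the disk wait ... */
			if (fatal_signal_pending(current))
				return;	/* ... so just bail out if killed */
			/* ... otherwise retake it and retry exactly once */
			flags &= ~FAULT_FLAG_ALLOW_RETRY;
			down_read(&mm->mmap_sem);
			goto retry;
		}
		up_read(&mm->mmap_sem);
	}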
Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This allocation may be large. The code is probing to see if it will
succeed and if not, it falls back to vmalloc(). We should suppress any
page-allocation failure messages when the fallback happens.
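In other words, the probe-then-fallback should look something like this
(a sketch; the real buffer and length names come from the code being
patched):
	#include <linux/slab.h>
	#include <linux/vmalloc.h>
	/* Try the cheap contiguous allocation quietly: failure here is
	 * expected and handled, so __GFP_NOWARN suppresses the warning. */
	static void *alloc_big_buffer(size_t len, bool *is_vmalloc)
	{
		void *buf = kmalloc(len, GFP_KERNEL | __GFP_NOWARN);
		if (buf) {
			*is_vmalloc = false;
			return buf;
		}
		*is_vmalloc = true;
		return vmalloc(len);
	}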
Reported-by: Dave Jones <davej@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Whoops: first, I reimplemented the already-existing has_resources
without noticing; second, I got the test backwards. I did pick a better
name, though. Combine the two....
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
In the NFSv4.1 client-reboot case we're currently removing the client's
previous state in exchange_id. That's wrong--we should be waiting till
the confirming create_session.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The comment is redundant, and if we really want dprintk's here they'd
probably be better in the common (check_slot_seqid) code.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The client should ignore the returned sequence_id in the case where the
CONFIRMED flag is set on an exchange_id reply--and in the unconfirmed
case "1" is always the right response. So it shouldn't actually matter
what we return here.
We could continue returning 1 just to catch clients ignoring the spec
here, but I'd rather be generous. Other things equal, returning the
existing sequence_id seems more informative.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Otherwise nfsd4_set_ex_flags writes over the return flags.
Reported-by: Bryan Schumaker <bjschuma@netapp.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
This can't happen:
- cl_time is zeroed only by unhash_client_locked, which is only
ever called under both the state lock and the client lock.
- every caller of renew_client() should have looked up a
(non-expired) client and then called renew_client() all
without dropping the state lock.
- the only other caller of renew_client_locked() is
release_session_client(), which first checks under the
client_lock that the cl_time is nonzero.
So make it clear that this is a bug, not something we handle. I can't
quite bring myself to make this a BUG(), though, as there are a lot of
renew_client() callers, and returning here is probably safer than a
BUG().
We'll consider making it a BUG() after some more cleanup.
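Concretely, the guard amounts to something like this (a sketch of the
intent; simplified from nfsd's nfs4state.c):
	static void renew_client_locked(struct nfs4_client *clp)
	{
		if (!clp->cl_time) {
			/* See above: every caller should have found the
			 * client unexpired under the state lock, so this is
			 * a bug; returning is safer than a BUG() for now. */
			WARN_ON(1);
			return;
		}
		/* ... normal renewal: bump cl_time, move the client to the
		 * front of the LRU, etc. ... */
	}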
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The cases here divide into two main categories:
- if there's an unconfirmed record with a matching verifier,
then this is a "normal", successful case: we're either creating
a new client, or updating an existing one.
- otherwise, this is a weird case: a replay, or a server reboot.
Reordering to reflect that makes the code a bit more concise and the
logic a lot easier to understand.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Note CLID_INUSE is for the case where two clients are trying to use the
same client-provided long-form client identifier. But what we're
looking at here is the server-returned shorthand client id--if those
clash there's a bug somewhere.
Fix the error return, pull the check out into common code, and do the
check unconditionally in all cases.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
New clients are created only by nfsd4_setclientid(), which always gives
any new client a unique clientid. The only exception is in the
"callback update" case, in which case it may create an unconfirmed
client with the same clientid as a confirmed client. In that case it
also checks that the confirmed client has the same credential.
Therefore, it is pointless for setclientid_confirm to check whether a
confirmed and unconfirmed client with the same clientid have matching
credentials--they're guaranteed to.
Instead, it should be checking whether the credential on the
setclientid_confirm matches either of those. Otherwise, it could be
anyone sending the setclientid_confirm. Granted, I can't see why anyone
would, but still it's probably safer to check.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>