Brett Cannon
50bb7e12ec
Remove a tuple unpacking in a parameter list to silence a SyntaxWarning raised
...
while running under -3.
2008-08-02 03:15:20 +00:00
Benjamin Peterson
8456f64ce2
revert 63965 for performance reasons
2008-06-05 23:02:33 +00:00
Benjamin Peterson
30dc7b8ce2
use the more idiomatic while True
2008-06-05 22:39:34 +00:00
Amaury Forgeot d'Arc
da0c025a43
Issue #2495: tokenize.untokenize did not insert a space between two consecutive string literals:
...
"" "" => """", which is invalid code.
Will backport
2008-03-27 23:23:54 +00:00
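The bug can be exercised against the modern tokenize module; a minimal sketch of the path this patch touched (the compatibility mode, where only (type, string) pairs are passed and untokenize must decide the spacing itself):

```python
import io
import tokenize

source = '"" ""\n'
tokens = tokenize.generate_tokens(io.StringIO(source).readline)
# Passing only (type, string) pairs drops position information, so
# untokenize must keep two consecutive string literals separated itself.
result = tokenize.untokenize((tok.type, tok.string) for tok in tokens)
print(repr(result))  # the strings stay separated: '"" ""', not '""""'
```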
Eric Smith
0aed07ad80
Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS.
2008-03-17 19:43:40 +00:00
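A quick sketch of what PEP 3127 support means for the tokenizer: the new-style octal (0o) and binary (0b) literals come back as single NUMBER tokens rather than errors.

```python
import io
import tokenize

source = "0o17 + 0b101\n"
# Collect the NUMBER tokens; PEP 3127 literals tokenize as one unit each.
numbers = [tok.string
           for tok in tokenize.generate_tokens(io.StringIO(source).readline)
           if tok.type == tokenize.NUMBER]
print(numbers)
```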
Georg Brandl
14404b68d8
Fix #1679: "0x" was taken as a valid integer literal.
...
Fixes the tokenizer, tokenize.py and int() to reject this.
Patches by Malte Helmert.
2008-01-19 19:27:05 +00:00
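One of the patched paths is easy to check directly with the modern int(): a bare radix prefix carries no digits and must be rejected. (The accepts helper is illustrative, not part of the patch.)

```python
def accepts(text):
    """Return True if int() accepts `text` with auto-detected base."""
    try:
        int(text, 0)
        return True
    except ValueError:
        return False

# A bare prefix has no digits, so all of these are rejected...
print([accepts(p) for p in ("0x", "0o", "0b")])
# ...while a prefix followed by digits is fine.
print(accepts("0x1f"))
```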
Christian Heimes
288e89acfc
Added bytes and b'' as aliases for str and ''
2008-01-18 18:24:07 +00:00
Raymond Hettinger
8a7e76bcfa
Add name to credits (for untokenize).
2006-12-02 02:00:39 +00:00
Jeremy Hylton
39c532c0b6
Replace dead code with an assert.
...
Now that COMMENT tokens are reliably followed by NL or NEWLINE,
there is never a need to add extra newlines in untokenize.
2006-08-23 21:26:46 +00:00
Jeremy Hylton
76467ba6d6
Bug fixes large and small for tokenize.
...
Small: Always generate a NL or NEWLINE token following
a COMMENT token. The old code did not generate an NL token if
the comment was on a line by itself.
Large: The output of untokenize() will now match the
input exactly if it is passed the full token sequence. The
old, crufty output is still generated if a limited input
sequence is provided, where limited means that it does not
include position information for tokens.
Remaining bug: There is no CONTINUATION token (\) so there is no way
for untokenize() to handle such code.
Also, expanded the number of doctests in hopes of eventually removing
the old-style tests that compare against a golden file.
Bug fix candidate for Python 2.5.1. (Sigh.)
2006-08-23 21:14:03 +00:00
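The large fix is the exact round-trip guarantee; a sketch of what it promises when untokenize() receives the full token sequence, positions included:

```python
import io
import tokenize

source = "x = 1  # a comment\nif x:\n    y = 2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
# With full position information, the output matches the input exactly,
# comments and whitespace included.
rebuilt = tokenize.untokenize(tokens)
assert rebuilt == source
```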
Georg Brandl
2463f8f831
Make tabnanny recognize IndentationErrors raised by tokenize.
...
Add a test to test_inspect to make sure indented source
is recognized correctly. (fixes #1224621)
2006-08-14 21:34:08 +00:00
Guido van Rossum
c259cc9c4c
Insert a safety space after numbers as well as names in untokenize().
2006-03-30 21:43:35 +00:00
Raymond Hettinger
da99d1cbfe
SF bug #1224621: tokenize module does not detect inconsistent dedents
2005-06-21 07:43:58 +00:00
Raymond Hettinger
68c0453418
Add untokenize() function to allow full round-trip tokenization.
...
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
2005-06-10 11:05:19 +00:00
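The kind of tool this enables, sketched with the modern API (the rename helper and its names are illustrative, not part of the patch):

```python
import io
import tokenize

def rename(source, old, new):
    """Rewrite every NAME token equal to `old` as `new`, then write back."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string == old:
            tok = tok._replace(string=new)
        tokens.append(tok)
    # untokenize() turns the modified stream back into source text.
    return tokenize.untokenize(tokens)

print(rename("spam = spam + 1\n", "spam", "eggs"))
```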
Anthony Baxter
c2a5a63654
PEP-0318, @decorator-style. In Guido's words:
...
"@ seems the syntax that everybody can hate equally"
Implementation by Mark Russell, from SF #979728.
2004-08-02 06:10:11 +00:00
Guido van Rossum
68468eba63
Get rid of many apply() calls.
2003-02-27 20:14:51 +00:00
Raymond Hettinger
78a7aeeb1a
SF 633560: tokenize.__all__ needs "generate_tokens"
2002-11-05 06:06:02 +00:00
Guido van Rossum
9d6897accc
Speed up the most egregious "if token in (long tuple)" cases by using
...
a dict instead. (Alas, using a Set would be slower instead of
faster.)
2002-08-24 06:54:19 +00:00
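The pattern behind the speedup, as an illustrative sketch (the names are hypothetical, not taken from the patch):

```python
# Before: every membership test scans the tuple linearly.
SLOW_OPS = ('**=', '//=', '<<=', '>>=', '<=', '>=', '==', '!=')

# After: dict.fromkeys builds a hash table, so `in` becomes a
# constant-time lookup.  (The commit notes that a Set would have been
# slower than a plain dict at the time.)
FAST_OPS = dict.fromkeys(SLOW_OPS)

def is_augmented(op):
    return op in FAST_OPS

print(is_augmented('//='), is_augmented('+'))
```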
Tim Peters
8ac1495a6a
Whitespace normalization.
2002-05-23 15:15:30 +00:00
Raymond Hettinger
d1fa3db52d
Added docstrings excerpted from Python Library Reference.
...
Closes patch 556161.
2002-05-15 02:56:03 +00:00
Tim Peters
496563a514
Remove some now-obsolete generator future statements.
...
I left the email pkg alone; I'm not sure how Barry would like to handle
that.
2002-04-01 00:28:59 +00:00
Neal Norwitz
e98d16e8a4
Clean up x so it is not left in the module
2002-03-26 16:20:26 +00:00
Tim Peters
d507dab91f
SF patch #455966: Allow leading 0 in float/imag literals.
...
Consequences for Jython still unknown (but raised on Jython-Dev).
2001-08-30 20:51:59 +00:00
Guido van Rossum
96204f5e49
Add new tokens // and //=, in support of PEP 238.
2001-08-08 05:04:07 +00:00
Fred Drake
79e75e1916
Use string.ascii_letters instead of string.letters (SF bug #226706).
2001-07-20 19:05:50 +00:00