Jason R. Coombs
eabfe8cc0e
Issue #20387: Backport fix from Python 3.4
2015-06-28 13:05:19 -04:00
Terry Jan Reedy
bd7cf3ade3
Issue #9974: When untokenizing, use row info to insert backslash+newline.
Original patches by A. Kuchling and G. Rees (#12691).
2014-02-23 23:32:59 -05:00
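A minimal sketch of the behavior this fixes, assuming Python 3's tokenize module: a source line split with a backslash continuation should survive a round trip through untokenize(); the documented guarantee covers token types and strings, not byte-identical text.
    import io
    import tokenize

    # A line continued with backslash+newline.
    src = "x = 1 + \\\n    2\n"
    toks = list(tokenize.generate_tokens(io.StringIO(src).readline))
    out = tokenize.untokenize(toks)

    # The output tokenizes back to the same (type, string) pairs,
    # even if spacing differs from the original text.
    old = [(t.type, t.string) for t in toks]
    new = [(t.type, t.string)
           for t in tokenize.generate_tokens(io.StringIO(out).readline)]
    assert old == new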
Terry Jan Reedy
8ab7cba924
whitespace
2014-02-17 23:16:26 -05:00
Terry Jan Reedy
6858f00dab
Issue #8478 : Untokenizer.compat now processes first token from iterator input.
Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees.
2014-02-17 23:12:07 -05:00
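The fix concerns the two-tuple ("compat") path when the input is a lazy iterator rather than a list; previously the first token could be mishandled. A rough illustration, assuming Python 3's tokenize:
    import io
    import tokenize

    src = "a = 1\n"
    # A generator of (type, string) pairs -- an iterator, not a list.
    pairs = ((t.type, t.string)
             for t in tokenize.generate_tokens(io.StringIO(src).readline))
    out = tokenize.untokenize(pairs)
    # Spacing is regenerated in compat mode, but no token is lost and
    # the result is still valid Python.
    assert "a" in out and "1" in out
    assert compile(out, "<roundtrip>", "exec")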
Terry Jan Reedy
7751a34400
Untokenize: A logically incorrect assert tested user input validity.
Replace it with correct logic that raises ValueError for bad input.
Issues #8478 and #12691 reported the incorrect logic.
Add an Untokenize test case and an initial test method.
2014-02-17 16:45:38 -05:00
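Roughly what the new check rejects, under the assumption that untokenize() validates that token positions never move backwards (a sketch, not the exact error text):
    import tokenize

    line = "x = 1\n"
    # Deliberately malformed: the second token starts before the first one ends.
    bad = [
        (tokenize.NAME, "x", (1, 4), (1, 5), line),
        (tokenize.OP,   "=", (1, 0), (1, 1), line),
    ]
    try:
        tokenize.untokenize(bad)
    except ValueError as exc:
        print("rejected:", exc)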
Ezio Melotti
2612679ddc
#19620: Fix typo in docstring (noticed by Christopher Welborn).
2013-11-25 05:14:51 +02:00
Ezio Melotti
7d24b1698a
#16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder.
2012-11-03 17:30:51 +02:00
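The case being fixed, sketched with Python 3's tokenize: source that ends in spaces with no final newline should tokenize cleanly rather than producing an error token.
    import io
    import tokenize

    src = "x = 1   "          # trailing spaces, no newline at end of file
    toks = list(tokenize.generate_tokens(io.StringIO(src).readline))
    assert toks[-1].type == tokenize.ENDMARKER
    assert tokenize.ERRORTOKEN not in [t.type for t in toks]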
Meador Inge
43f42fc3cb
Issue #15054: Fix incorrect tokenization of 'b' and 'br' string literals.
Patch by Serhiy Storchaka.
2012-06-16 21:05:50 -05:00
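What correct tokenization looks like here (a sketch assuming Python 3.3+, where both prefixes are legal): each prefixed literal must come out as a single STRING token, prefix included.
    import io
    import tokenize

    src = "x = b'bytes' + br'raw'\n"
    strings = [t.string
               for t in tokenize.generate_tokens(io.StringIO(src).readline)
               if t.type == tokenize.STRING]
    assert strings == ["b'bytes'", "br'raw'"]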
Benjamin Peterson
ca2d2529ce
some cleanups
2009-10-15 03:05:39 +00:00
Benjamin Peterson
447dc15658
use floor division and add a test that exercises the tabsize codepath
2009-10-15 01:49:37 +00:00
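The "tabsize codepath" is the column arithmetic used when a tab is seen while measuring indentation; with true division it would produce a float. A sketch of the intended integer math, assuming the default tab size of 8:
    tabsize = 8
    column = 3
    # Advance to the next tab stop using floor division,
    # roughly as tokenize's indentation scan does.
    column = column // tabsize * tabsize + tabsize
    assert column == 8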
Benjamin Peterson
e537adfd08
pep8ify if blocks
2009-10-15 01:47:28 +00:00
Brett Cannon
50bb7e12ec
Remove a tuple unpacking in a parameter list to silence a SyntaxWarning raised
while running under -3.
2008-08-02 03:15:20 +00:00
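For context, the Python 2-only construct that -3 warns about, and the portable rewrite (the names here are illustrative, not necessarily the ones changed in this commit):
    # Python 2 only; removed in Python 3 and flagged by the -3 switch:
    #     def shift((row, col), by):
    #         return (row, col + by)
    # Portable replacement: take one argument and unpack it inside the body.
    def shift(pos, by):
        row, col = pos
        return (row, col + by)

    assert shift((1, 4), 2) == (1, 6)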
Benjamin Peterson
8456f64ce2
revert 63965 for performance reasons
2008-06-05 23:02:33 +00:00
Benjamin Peterson
30dc7b8ce2
use the more idiomatic while True
2008-06-05 22:39:34 +00:00
Amaury Forgeot d'Arc
da0c025a43
Issue #2495: tokenize.untokenize did not insert a space between two consecutive string literals:
"" "" => """", which is invalid code.
Will backport
2008-03-27 23:23:54 +00:00
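A sketch of the failure mode, assuming Python 3's tokenize: two adjacent empty string literals must keep a separating space when positions are discarded, otherwise they fuse into an unterminated triple quote.
    import io
    import tokenize

    src = '"" ""\n'      # two adjacent (implicitly concatenated) empty strings
    pairs = ((t.type, t.string)
             for t in tokenize.generate_tokens(io.StringIO(src).readline))
    out = tokenize.untokenize(pairs)
    assert '""""' not in out                     # the broken pre-fix output
    assert compile(out, "<roundtrip>", "exec")   # still valid Python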
Eric Smith
0aed07ad80
Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS.
2008-03-17 19:43:40 +00:00
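PEP 3127 adds the 0o and 0b literal forms; after this change tokenize treats each as a single NUMBER token. A quick check, assuming Python 3:
    import io
    import tokenize

    src = "perms = 0o755 + 0b1010\n"
    numbers = [t.string
               for t in tokenize.generate_tokens(io.StringIO(src).readline)
               if t.type == tokenize.NUMBER]
    assert numbers == ["0o755", "0b1010"]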
Georg Brandl
14404b68d8
Fix #1679: "0x" was taken as a valid integer literal.
Fixes the tokenizer, tokenize.py and int() to reject this.
Patches by Malte Helmert.
2008-01-19 19:27:05 +00:00
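The bare prefix is now rejected wherever integer literals are parsed. Two quick checks, assuming current Python 3 behavior:
    # int() refuses the bare prefix ...
    try:
        int("0x", 16)
    except ValueError:
        pass
    else:
        raise AssertionError("bare '0x' should not be a valid int literal")

    # ... and so does the compiler.
    try:
        compile("a = 0x\n", "<test>", "exec")
    except SyntaxError:
        pass
    else:
        raise AssertionError("bare '0x' should not compile")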
Christian Heimes
288e89acfc
Added bytes and b'' as aliases for str and ''
2008-01-18 18:24:07 +00:00
Raymond Hettinger
8a7e76bcfa
Add name to credits (for untokenize).
2006-12-02 02:00:39 +00:00
Jeremy Hylton
39c532c0b6
Replace dead code with an assert.
Now that COMMENT tokens are reliably followed by NL or NEWLINE,
there is never a need to add extra newlines in untokenize.
2006-08-23 21:26:46 +00:00
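The invariant the assert relies on, shown with Python 3's tokenize: a COMMENT token is always followed by an NL (comment-only line) or NEWLINE token.
    import io
    import tokenize

    src = "# a comment on its own line\n"
    names = [tokenize.tok_name[t.type]
             for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    assert names[:2] == ["COMMENT", "NL"]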
Jeremy Hylton
76467ba6d6
Bug fixes large and small for tokenize.
Small: Always generate a NL or NEWLINE token following
a COMMENT token. The old code did not generate an NL token if
the comment was on a line by itself.
Large: The output of untokenize() will now match the
input exactly if it is passed the full token sequence. The
old, crufty output is still generated if a limited input
sequence is provided, where limited means that it does not
include position information for tokens.
Remaining bug: There is no CONTINUATION token (\) so there is no way
for untokenize() to handle such code.
Also, expanded the number of doctests in hopes of eventually removing
the old-style tests that compare against a golden file.
Bug fix candidate for Python 2.5.1. (Sigh.)
2006-08-23 21:14:03 +00:00
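The two modes described above, sketched against Python 3's tokenize (matching the behavior this commit describes):
    import io
    import tokenize

    src = "def f(a, b):\n    return a + b  # sum\n"
    toks = list(tokenize.generate_tokens(io.StringIO(src).readline))

    # Full 5-tuples: the output reproduces the input exactly.
    assert tokenize.untokenize(toks) == src

    # Limited (type, string) input: spacing is regenerated, but the
    # result is still valid, equivalent source.
    approx = tokenize.untokenize((t.type, t.string) for t in toks)
    assert compile(approx, "<roundtrip>", "exec")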
Georg Brandl
2463f8f831
Make tabnanny recognize IndentationErrors raised by tokenize.
Add a test to test_inspect to make sure indented source
is recognized correctly. (fixes #1224621)
2006-08-14 21:34:08 +00:00
Guido van Rossum
c259cc9c4c
Insert a safety space after numbers as well as names in untokenize().
2006-03-30 21:43:35 +00:00
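Why the extra space matters when positions are not available: without it a NUMBER can fuse with a following keyword or name. A sketch assuming Python 3's tokenize:
    import io
    import tokenize

    src = "y = 1 if flag else 2\n"
    pairs = ((t.type, t.string)
             for t in tokenize.generate_tokens(io.StringIO(src).readline))
    out = tokenize.untokenize(pairs)
    # The safety space keeps '1' and 'if' from fusing into '1if'.
    assert "1if" not in out
    assert compile(out, "<roundtrip>", "exec")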
Raymond Hettinger
da99d1cbfe
SF bug #1224621: tokenize module does not detect inconsistent dedents
2005-06-21 07:43:58 +00:00
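After this change, pure-Python tokenize raises IndentationError for a dedent that matches no enclosing indentation level, like the C tokenizer does. A sketch:
    import io
    import tokenize

    bad = (
        "if flag:\n"
        "        pass\n"
        "    pass\n"        # dedents to a level that was never opened
    )
    try:
        list(tokenize.generate_tokens(io.StringIO(bad).readline))
    except IndentationError as exc:
        print("caught:", exc)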
Raymond Hettinger
68c0453418
Add untokenize() function to allow full round-trip tokenization.
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
2005-06-10 11:05:19 +00:00
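The workflow this enables, sketched in Python 3 terms: read tokens, edit the stream, and write it back out (the identifier names here are illustrative).
    import io
    import tokenize

    src = "total = price * qty\n"

    def rename(tok):
        # Swap one identifier in the stream; dropping positions makes
        # untokenize regenerate spacing (compat mode).
        if tok.type == tokenize.NAME and tok.string == "qty":
            return (tokenize.NAME, "quantity")
        return (tok.type, tok.string)

    pairs = [rename(t)
             for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    print(tokenize.untokenize(pairs))    # spacing may differ from the input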