sporadic crashes in multi-thread programs when several long deallocator
chains ran concurrently and involved subclasses of built-in container
types.
Because of this change, a couple extension modules compiled for 2.7.4
(those which use the trashcan mechanism, despite it being undocumented)
will not be loadable by 2.7.3 and earlier. However, extension modules
compiled for 2.7.3 and earlier will be loadable by 2.7.4.
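As a schematic sketch only (the class name and sizes are made up, and this is
not claimed to reproduce the crash), the workload shape described above is
roughly: several threads concurrently dropping long chains of a container
subclass, so that their deallocations all pass through the trashcan
mechanism at the same time.

    import threading

    class Node(list):                  # subclass of a built-in container type
        pass

    def build_chain(depth):
        head = cur = Node()
        for _ in range(depth):
            nxt = Node()
            cur.append(nxt)
            cur = nxt
        return head

    def churn():
        for _ in range(20):
            chain = build_chain(100000)   # tearing this down recurses deeply,
            del chain                     # so it goes through the trashcan

    threads = [threading.Thread(target=churn) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()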
tuples, dicts and sets on failure.
Many handy new type- and comparison-specific assert* methods have been added
that fail with error messages actually useful for debugging. Contributed
by Google and completed with help from mfoord and GvR at the PyCon 2009 sprints.
Discussion lives in http://bugs.python.org/issue2578.
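A short sketch of the new methods in use (assuming a unittest version that
has them); assertEqual() dispatches to the type-specific comparisons, and the
failing assertions below exist only to show the improved messages:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_dicts(self):
            # Dispatches to assertDictEqual() and reports a readable diff
            # of the differing keys/values on failure.
            self.assertEqual({'a': 1, 'b': 2}, {'a': 1, 'b': 3})

        def test_membership(self):
            # assertIn() shows both the item and the container on failure.
            self.assertIn('needle', ['hay', 'stack'])

    if __name__ == '__main__':
        unittest.main()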
untrackable objects are not tracked by the garbage collector. This can
reduce the size of collections and therefore the garbage collection overhead
on long-running programs, depending on their particular use of datatypes.
(trivia: this makes the "binary_trees" benchmark from the Computer Language
Shootout 40% faster)
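The effect can be observed from Python code with gc.is_tracked() (assuming a
version that provides it); the exact behaviour of the heuristic is an
implementation detail.

    import gc

    print(gc.is_tracked({"a": 1}))    # False: only atomic, untrackable contents
    print(gc.is_tracked({"a": []}))   # True: contains a trackable container

    t = tuple([1, 2, "three"])        # freshly created tuples start out tracked
    print(gc.is_tracked(t))
    gc.collect()
    print(gc.is_tracked(t))           # the collector may untrack it once it has
                                      # inspected its (all-atomic) contents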
They remain fragile, though.
For example, a call to assertEqual currently does not make any allocation
(which surprised me at first).
But this can change if gc.collect() also starts deleting the numerous
"zombie frames" attached to each function.
In cyclic gc, clear weakrefs to unreachable objects before allowing any
Python code (weakref callbacks or __del__ methods) to run.
This is a critical bugfix, affecting all versions of Python since weakrefs
were introduced. I'll backport to 2.3.
These never failed in 2.3, and the tests confirm it. They still blow up
in the 2.2 branch, even though all the gc-vs-__del__ fixes from 2.3
have been backported (and this is expected -- 2.2 needs more work than
2.3 needed).
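To illustrate the invariant the fix establishes (illustrative names only):
because weakrefs to unreachable objects are cleared before any callback or
__del__ runs, a callback can no longer observe its referent in a
half-destructed state.

    import gc, weakref

    class Node(object):
        pass

    def callback(ref):
        # By the time this runs, the weakref has already been cleared.
        print("callback fired; referent gone: %r" % (ref() is None))

    a = Node()
    b = Node()
    a.partner, b.partner = b, a    # a reference cycle
    r = weakref.ref(a, callback)
    del a, b
    gc.collect()                   # collects the cycle; prints "... True"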
of PyObject_HasAttr(); the former promises never to execute
arbitrary Python code. Undid many of the changes recently made to
worm around the worst consequences of the fact that PyObject_HasAttr()
could execute arbitrary Python code.
Compatibility is hard to discuss, because the dangerous cases are
so perverse, and much of this appears to rely on implementation
accidents.
To start with, using hasattr() to check for __del__ wasn't only
dangerous, in some cases it was wrong: if an instance of an old-
style class didn't have "__del__" in its instance dict or in any
base class dict, but a getattr hook said __del__ existed, then
hasattr() said "yes, this object has a __del__". But
instance_dealloc() ignores the possibility of getattr hooks when
looking for a __del__, so while object.__del__ succeeds, no
__del__ method is called when the object is deleted. gc was
therefore incorrect in believing that the object had a finalizer.
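A hedged illustration of that wrong answer, with a made-up old-style class
(2.x) whose getattr hook claims a __del__ exists:

    import sys

    class Hooked:                      # old-style class with a getattr hook
        def __getattr__(self, name):
            if name == '__del__':
                return lambda: sys.stdout.write("fake __del__ called\n")
            raise AttributeError(name)

    h = Hooked()
    print(hasattr(h, '__del__'))       # True -- the getattr hook answers
    del h                              # ...but nothing is printed, because
                                       # instance_dealloc() never consults the hook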
The new method doesn't suffer that problem (like instance_dealloc(),
_PyObject_Lookup() doesn't believe __del__ exists in that case), but
does suffer a somewhat opposite -- and even more obscure -- oddity:
if an instance of an old-style class doesn't have "__del__" in its
instance dict, and a base class does have "__del__" in its dict,
and the first base class with a "__del__" associates it with a
descriptor (an object with a __get__ method), *and* if that
descriptor raises an exception when __get__ is called, then
(a) the current method believes the instance does have a __del__,
but (b) hasattr() does not believe the instance has a __del__.
While these disagree, I believe the new method is "more correct":
because the descriptor *will* be called when the object is
destructed, it can execute arbitrary Python code at the time the
object is destructed, and that's really what gc means by "has a
finalizer": not specifically a __del__ method, but more generally
the possibility of executing arbitrary Python code at object
destruction time. Code in a descriptor's __get__() executed at
destruction time can be just as problematic as code in a
__del__() executed then.
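A sketch of that corner case (illustrative names; old-style classes, so 2.x
only), where hasattr() and the new lookup give opposite answers:

    class RaisingDescriptor(object):
        def __get__(self, obj, cls):
            raise RuntimeError("boom at lookup time")

    class Base:                        # old-style class
        __del__ = RaisingDescriptor()

    class Derived(Base):
        pass

    d = Derived()
    print(hasattr(d, '__del__'))       # False: __get__ raises and hasattr()
                                       # swallows the exception
    del d                              # yet the descriptor *is* invoked at
                                       # destruction time, i.e. arbitrary
                                       # Python code runs then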
So I believe the new method is better on all counts.
Bugfix candidate, but it's unclear to me how all this differs in
the 2.2 branch (e.g., new-style and old-style classes already
took different gc paths in 2.3 before this last round of patches,
but don't in the 2.2 branch).
externally unreachable objects with finalizers, and externally unreachable
objects without finalizers reachable from such objects. This allows us
to call has_finalizer() at most once per object, and so limit the pain of
nasty getattr hooks. This fixes the failing "boom 2" example Jeremy
posted (a non-printing variant of which is now part of test_gc), via never
triggering the nasty part of its __getattr__ method.
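For reference, a sketch in the spirit of that non-printing variant (not
necessarily the exact code now in test_gc): two instances with a nasty
__getattr__, tied into a cycle so that only the collector can reclaim them.

    import gc

    class Boom2:
        def __init__(self):
            self.x = 0
        def __getattr__(self, attr):
            self.x += 1
            if self.x > 1:
                del self.attr          # nasty: mutates itself during lookup
            raise AttributeError

    a, b = Boom2(), Boom2()
    a.attr = b                         # a two-object reference cycle
    b.attr = a
    del a, b
    gc.collect()                       # must not trip over the getattr hook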
imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".
This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).
Now Tim and Jack can have at it. :)
takes much longer to run in the context of the test suite than when run in
isolation. That's because it forces a large number of full collections,
which take time proportional to the total number of gc'ed objects in the
whole system.
But since the dangerous implementation trickery that caused this test to
fail in 2.0, 2.1 and 2.2 doesn't exist in 2.3 anymore (the trashcan
mechanism stopped doing evil things when the possibility for compiling
without cyclic gc was taken away), such an expensive test is no longer
justified. This checkin leaves the test intact, but fiddles the
constants to reduce the runtime by about a factor of 5.