const char* instead of char*. The change is conceptually correct, and
indirectly fixes a compiler warning introduced when somebody else innocently
passed a const char* to this function.
object, so the "Metrowerks only" section should not decref it in case
of error (the caller is responsible for decref'ing in case of error --
and does).
The C code in fileobject.readinto(buffer) that parses the arguments
assumes that size_t is interchangeable with int:
    size_t ntodo, ndone, nnow;

    if (f->f_fp == NULL)
        return err_closed();
    if (!PyArg_Parse(args, "w#", &ptr, &ntodo))
        return NULL;
This causes a problem on Alpha / Tru64 / OSF1 v5.1
where size_t is a long and sizeof(long) != sizeof(int).
The patch I'm proposing declares ntodo as an int. An
alternative might be to redefine w# to expect size_t.
[We can't change w# because there are probably third party modules
relying on it. GvR]
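For concreteness, here is the snippet again with the proposed change
applied (the char * declaration of ptr is assumed here, as required by
"w#"; the rest of the read loop is elided):

    char *ptr;
    int ntodo;                  /* was size_t; "w#" stores an int here */
    size_t ndone, nnow;

    if (f->f_fp == NULL)
        return err_closed();
    if (!PyArg_Parse(args, "w#", &ptr, &ntodo))
        return NULL;
    /* from here on, use (size_t)ntodo for the fread() bookkeeping */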
The problem is that if fread() returns a short count, we attempt
another fread() the next time through the loop, and apparently glibc
clears or ignores the eof condition so the second fread() requires
another ^D to make it see the eof condition.
According to the man page (and the C std, I hope) fread() can only
return a short count on error or eof. I'm using that in the band-aid
solution to avoid calling fread() a second time after a short read.
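The band-aid, roughly, is to break out of the loop on any short count
instead of retrying (a sketch; variable names are illustrative and the
error is reported to the caller in the real code):

    size_t nrequested, ndone = 0, nnow;

    while (ntodo > 0) {
        nrequested = ntodo;
        nnow = fread(buf + ndone, 1, nrequested, fp);
        ndone += nnow;
        ntodo -= nnow;
        if (nnow < nrequested) {
            /* Short count: per the C standard this means error or EOF,
               so don't call fread() again -- a second call would need
               another ^D before it sees the EOF condition. */
            if (ferror(fp))
                clearerr(fp);   /* the real code raises IOError here */
            break;
        }
    }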
Note that xreadlines() still has this problem: it calls
readlines(sizehint) until it gets a zero-length return. Since
xreadlines() is mostly used for reading real files, I won't worry
about this until we get a bug report.
lseek(fp, 0L, SEEK_CUR) can make a file descriptor unusable.
This workaround is expected to last only a few weeks (until GUSI
is fixed), but without it test_email fails.
many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like
    void xxx_dealloc(PyObject *self)
    {
        if (PyXxxCheckExact(self))
            PyObject_DEL(self);
        else
            self->ob_type->tp_free(self);
    }
is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
no backwards compatibility to worry about, so I just pushed the
'closure' struct member to the back -- it's never used in the current
code base (I may eliminate it, but that's more work because the getter
and setter signatures would have to change.)
As examples, I added actual docstrings to the getset attributes of a
few types: file.closed, xxsubtype.spamdict.state.
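For illustration, a getset table entry with a docstring now looks
roughly like this (the getter name and doc text below are placeholders,
not copied from fileobject.c); the doc slot sits just before the
now-trailing closure slot, which defaults to NULL:

    static PyGetSetDef file_getsetlist[] = {
        {"closed", (getter)get_closed, NULL,
         "True if the file is closed"},
        {0},    /* sentinel */
    };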
compatibility, this required changing the type of every "struct
memberlist" array that is referenced from a type's tp_members slot to
PyMemberDef;
"struct memberlist" is now only used by old code that still calls
PyMember_Get/Set. The code in PyObject_GenericGetAttr/SetAttr now
calls the new APIs PyMember_GetOne/SetOne, which take a PyMemberDef
argument.
As examples, I added actual docstrings to the attributes of a few
types: file, complex, instance method, super, and xxsubtype.spamlist.
Also converted the symtable to new style getattr.
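For illustration, a tp_members table with docstrings now looks roughly
like this (sketched from memory for the complex type; the doc text is a
placeholder):

    static PyMemberDef complex_members[] = {
        {"real", T_DOUBLE, offsetof(PyComplexObject, cval.real), READONLY,
         "the real part of a complex number"},
        {"imag", T_DOUBLE, offsetof(PyComplexObject, cval.imag), READONLY,
         "the imaginary part of a complex number"},
        {0},    /* sentinel */
    };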
A surprising number of changes to split tp_new into tp_new and tp_init.
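For clarity, a sketch of the resulting division of labor (the Xxx type
and its value field are made up): tp_new allocates and fills in safe
defaults only, while tp_init parses the constructor arguments and may
be called again on an already initialized instance.

    typedef struct {
        PyObject_HEAD
        long value;
    } XxxObject;

    static PyObject *
    Xxx_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
    {
        XxxObject *self = (XxxObject *)type->tp_alloc(type, 0);
        if (self != NULL)
            self->value = 0;            /* safe default only */
        return (PyObject *)self;
    }

    static int
    Xxx_init(XxxObject *self, PyObject *args, PyObject *kwds)
    {
        long value = 0;
        if (!PyArg_ParseTuple(args, "|l:Xxx", &value))
            return -1;
        self->value = value;
        return 0;
    }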
Turned out the older PyFile_FromFile() didn't initialize the memory it
allocated in all (error) cases, which caused new sanity asserts
elsewhere to fail left & right (and could have, e.g., caused file_dealloc
to try decrefing random addresses).
just by doing type(f) where f is any file object. This left a hole in
restricted execution mode that rexec.py can't plug by itself (although it
can plug part of it; the rest is plugged in fileobject.c now).
Subtlety on Windows: if we change test_largefile.py to use a file
> 4GB, it still fails. A debug session suggests this is because
fseek(fp, 0, 2) refuses to seek to the end of the file when the file
is > 4GB, because it uses the SetFilePointer() in 32-bit mode.
But it only fails when we seek relative to the end of the file,
because in the other seek modes only calls to fgetpos() and fsetpos()
are made, which use Get/SetFilePointer() in 64-bit mode. Solution:
#ifdef MS_WINDOWS, replace the call to fseek(fp, ...) with a call to
_lseeki64(fileno(fp), ...). Make sure to call fflush(fp) first.
(XXX Could also replace the entire branch with a call to _lseeki64().
Would that be more efficient? Certainly less generated code.)
(XXX This needs more testing. I can't actually test that it works for
files >4GB on my Win98 machine, because the filesystem here won't let
me create files >=4GB at all. Tim should test this on his Win2K
machine.)
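Roughly, the replacement looks like this (a sketch: fp, offset and
whence come from the surrounding seek code, which is assumed to report
failure by returning -1):

    #ifdef MS_WINDOWS
        /* fseek() drives SetFilePointer() in 32-bit mode and can't reach
           offsets > 4GB; go through _lseeki64() on the underlying
           descriptor instead.  Flush first so stdio's buffer and the OS
           file position agree. */
        fflush(fp);
        if (_lseeki64(fileno(fp), offset, whence) == -1)
            return -1;
    #else
        if (fseek(fp, offset, whence) != 0)
            return -1;
    #endif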
Curious: the MS docs say stati64 etc. are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 GB.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
I believe this works on Linux (tested both on a system with large file
support and one without it), and it may work on Solaris 2.7.
The changes are twofold:
(1) The configure script now boldly tries to set the two symbols that
are recommended (for Solaris and Linux), and then tries a test
script that does some simple seeking without writing.
(2) The _portable_{fseek,ftell} functions are a little more systematic
in how they try the different large file support options: first
try fseeko/ftello, but only if off_t is large; then try
fseek64/ftell64; then try hacking with fgetpos/fsetpos.
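Condensed, the fallback order in _portable_fseek looks roughly like
this (HAVE_FSEEKO, SIZEOF_OFF_T, HAVE_FSEEK64 and SIZEOF_FPOS_T are the
autoconf-style symbols I expect configure to provide; the exact type of
the offset parameter also varies with these options, which I gloss over
here -- treat the details as a sketch):

    static int
    _portable_fseek(FILE *fp, off_t offset, int whence)
    {
    #if defined(HAVE_FSEEKO) && SIZEOF_OFF_T >= 8
        return fseeko(fp, offset, whence);
    #elif defined(HAVE_FSEEK64)
        return fseek64(fp, offset, whence);
    #elif defined(HAVE_LARGEFILE_SUPPORT) && SIZEOF_FPOS_T >= 8
        /* No 64-bit capable fseek(); fake it with fgetpos()/fsetpos(),
           assuming fpos_t is an integral type wide enough for the
           position. */
        fpos_t pos;
        switch (whence) {
        case SEEK_END:
            /* do a "no-op" seek first to sync stdio with the OS */
            if (fseek(fp, 0L, SEEK_END) != 0)
                return -1;
            /* fall through */
        case SEEK_CUR:
            if (fgetpos(fp, &pos) != 0)
                return -1;
            offset += pos;
            break;
        /* case SEEK_SET: offset is already absolute */
        }
        pos = offset;
        return fsetpos(fp, &pos);
    #else
        return fseek(fp, offset, whence);
    #endif
    }

_portable_ftell presumably mirrors this with ftello / ftell64 /
fgetpos.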
I'm keeping my fingers crossed. The meaning of the
HAVE_LARGEFILE_SUPPORT macro is not at all clear.
I'll see if I can get it to work on Windows as well.