Test 091 is supposed to work on 2.4 kernels, but as written there's
no way it will. Checking the actual block size and page size in the
run_fsx routine, and substituting them for BSIZE and PSIZE, is
error-prone when the two hold the same value. This is also a problem
for 4k sector devices. It's better to pass in what we want (PSIZE or
BSIZE) and have the run_fsx routine convert that to the command line
options that fsx wants. This gets rid of the bogus test failure in my
environment. The setting of bsize for linux-2.6 was also redundant,
so I removed it.
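The conversion idea can be sketched like this (a minimal sketch, not the actual run_fsx code: the fs block size and the `-r`/`-w` mapping to fsx's read/write boundary options are illustrative assumptions):

```shell
# Hypothetical sketch: callers pass the symbolic names PSIZE/BSIZE and
# run_fsx substitutes the values it detects, so tests never guess them.
psize=$(getconf PAGE_SIZE)
bsize=512                      # assumption: fs block size for this demo
args="-r PSIZE -w BSIZE"       # what a caller might request
args=$(echo "$args" | sed -e "s/PSIZE/$psize/g" -e "s/BSIZE/$bsize/g")
echo "$args"
```

This keeps the PSIZE/BSIZE distinction intact even on systems where the two values happen to be equal.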
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
We hit the following error while running test 089:
FSTYP -- ext3
PLATFORM -- Linux/x86_64 localhost 2.6.32-109.el6.x86_64
...
...
completed 50 iterations
completed 50 iterations
completed 50 iterations
-completed 50 iterations
completed 10000 iterations
directory entries:
t_mtab
Ran: 089
Failures: 089
Failed 1 of 1 tests
This is not easily reproducible, but one can eventually hit it when
running 089 in a loop. The problem is apparently that the output can
get lost, probably due to some stdio buffering weirdness.
This commit works around the issue by adding an optional argument to
t_mtab that specifies an output file. The t_mtab output is then
appended to that file, whose contents are printed to stdout just as
they would be if no output file were used.
With this commit applied the problem is no longer reproducible.
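The workaround pattern looks roughly like this (a sketch with an `echo` standing in for the real ./src/t_mtab binary; the filename is illustrative):

```shell
# Sketch of the workaround: the program appends its output to a file,
# and the harness cats the file afterwards, so output buffered in
# stdio cannot be lost on the way to the test's golden-output diff.
out=/tmp/t_mtab.out.$$
: > "$out"
echo "completed 50 iterations" >> "$out"  # stand-in for: ./src/t_mtab 50 "$out"
cat "$out"
rm -f "$out"
```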
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
allocsize is an XFS-specific mount option, and hence causes the test
to fail on other filesystems. Only set the mount option on XFS
filesystems.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Josef Bacik <josef@redhat.com>
test 042 generates a worst-case fragmented filesystem and uses it to
test xfs_fsr. It uses small 4k files to generate the hole-space-hole
pattern that fragments free space badly. It is much faster to
generate the same pattern by creating a single large file and
punching holes in it. Also, instead of writing large files to
create unfragmented space, just use preallocation so we don't have
to write the data to disk.
These changes reduce the runtime of the test on a single SATA drive
from 106s to 27s.
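The hole-space-hole generation can be sketched as follows (sizes and the target path are illustrative; the real test drives xfs_io against the scratch filesystem):

```shell
# Sketch: build one xfs_io command list that preallocates a single
# large file and punches a 4k hole out of every 8k, leaving an
# alternating used/free pattern - far faster than creating thousands
# of small files.
size=$((1024 * 1024))          # 1MB demo; the real test uses far more
cmds="-c 'falloc 0 $size'"
off=0; holes=0
while [ "$off" -lt "$size" ]; do
    cmds="$cmds -c 'fpunch $off 4096'"
    off=$((off + 8192)); holes=$((holes + 1))
done
echo "punched $holes holes"    # real run: eval xfs_io -f $cmds /mnt/scratch/file
```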
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Test 016 fails with delaylog because it measures log traffic to disk,
and delaylog writes almost nothing to the log for the given test. To
make it work, add sync calls to the work loop so that the log is
flushed reliably for both delaylog and nodelaylog and hence contains
the same number of log records.
As a result, the log space consumed by the test is not changed by
the delaylog option and the test passes. The test is not
significantly slowed down by the addition of the sync calls (takes
15s to run on a single SATA drive).
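The shape of the change is roughly this (a sketch with a trivial stand-in for the test's actual metadata workload):

```shell
# Sketch: flush the log after every unit of work so delaylog and
# nodelaylog write the same number of log records to disk.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch "$dir/file.$i"   # stand-in for the test's metadata workload
    sync                   # reliably force the log to disk each pass
done
n=$(ls "$dir" | wc -l)
echo "$n files created, one sync per pass"
rm -rf "$dir"
```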
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
The problem was reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=626244
The following simple test case triggers the problem:
# mkfs.xfs -f -d agsize=16m,size=50g <dev>
# mount <dev> /mnt
# xfs_io -f -c 'resvsp 0 40G' /mnt/foo
Turn this into a new xfsqa test so that we
exercise the problem code and prevent future regressions.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
We don't have any coverage of the splice functionality provided by
the kernel in xfstests. Add a simple test that uses the sendfile
operation built into xfs_io to copy a file, ensuring we at least
exercise the code path in xfstests.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
When running test 078 against a 4k logical block sized disk, it fails in
xfs_repair. The problem is that xfs_repair is passed the loopback
filename instead of the actual loop device. This means that it opens
the file O_DIRECT, and tries to do 512 byte aligned I/O to a 4k sector
device. The loop device, for better or for worse, will do buffered I/O,
and thus does not suffer from the same problem. So, the attached patch
sets up the loop device and passes that to xfs_repair. This resolves
the issue on my test system.
Comments are more than welcome.
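The setup amounts to the following (a sketch; image path and size are illustrative, and the mkfs/repair step is shown as a comment since it needs a real XFS image):

```shell
# Sketch: attach the image file to a loop device and repair the
# device, so xfs_repair's O_DIRECT I/O is aligned to the loop
# device's sector size rather than failing against a 4k-sector disk.
img=/tmp/xfs.img.$$
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
if loopdev=$(losetup -f --show "$img" 2>/dev/null); then
    # real sequence: mkfs.xfs -f "$loopdev" && xfs_repair "$loopdev"
    msg="attached: $loopdev"
    losetup -d "$loopdev"
else
    msg="attached: (skipped, losetup needs root)"
fi
echo "$msg"
rm -f "$img"
```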
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
This test really wants to test partial file-system block I/Os. Thus, if
the device has a 4K sector size, and the file system has a 4K block
size, there's really no point in running the test. In the attached
patch, I check that the fs block size is larger than the device's
logical block size, which should cover a 4k device block size with a 16k
fs block size.
I verified that the patched test does not run on my 4k sector device
with a 4k file system. I also verified that it continues to run on a
512 byte logical sector device with a 4k file system block size.
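The check can be sketched like this (the $TEST_DIR/$TEST_DEV variables follow xfstests conventions; here the decision is just printed, where the real test would call the _notrun helper):

```shell
# Sketch: compare the fs block size against the device's logical
# sector size and skip the test unless the fs block size is larger,
# since only then are partial-block I/Os possible.
fs_bsize=$(stat -f -c %S "${TEST_DIR:-/}")          # fs block size
dev_lbsize=$(blockdev --getss "${TEST_DEV:-}" 2>/dev/null) || dev_lbsize=512
if [ "$fs_bsize" -gt "$dev_lbsize" ]; then
    echo "run: block size $fs_bsize > sector size $dev_lbsize"
else
    echo "notrun: partial-block I/O is impossible here"
fi
```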
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
When running xfstests on a 4k logical sector device, I ran into a test
failure in test 198. The errors were all due to trying to do 512 byte
aligned I/O on a 4k logical sector device. The attached patch tries to
auto-detect the proper block size if no alignment is specified. If it
fails for one reason or another, it defaults to 4k alignment. This
seems to work fine in my test rig.
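The detection logic amounts to this (a sketch; $SCRATCH_DEV stands in for the device under test):

```shell
# Sketch: ask the device for its logical sector size; on any failure
# (no device, no permission, missing tool) fall back to a safe 4k
# alignment for the direct I/O.
align=$(blockdev --getss "${SCRATCH_DEV:-}" 2>/dev/null)
case "$align" in
    ''|*[!0-9]*) align=4096 ;;   # detection failed: default to 4k
esac
echo "DIO alignment: $align"
```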
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
I found that overwriting existing files hides a bug
in ext4 (since fixed). Removing the files before
the test reliably reproduces it.
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
I ran into a failure on an ext4 backport which should have
been caught by this test, but 30s wasn't long enough to
hit it reliably. So run a bit longer; it's not in the
quick group anyway.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Christoph reported that test 014 went from 7s to 870s runtime with
the dynamic speculative delayed allocation changes. Analysis of test
014 shows that it does this loop 10,000 times:
pwrite(random offset, 512 bytes);
truncate(random offset);
Where the random offset is anywhere in a 256MB file. Hence on
average every second write or truncate extends the file.
If large preallocation-beyond-EOF sizes are used, each extending
write or truncate will zero large numbers of blocks - tens of
megabytes at a time. The result is that instead of only writing
~10,000 blocks, we write hundreds to thousands of megabytes of zeros
to the file and that is where the difference in runtime is coming
from.
The IO pattern that this test is using does not reflect a common (or
sane!) real-world application IO pattern, so it is really just
exercising the allocation and truncation paths in XFS. To do this,
we don't need large amounts of preallocation beyond EOF that just
slows down the operation, so execute the test with a fixed, small
preallocation size that reflects the previous default.
By specifying the preallocation size via the allocsize mount option,
this also overrides any custom allocsize option provided for the
test, so the test will not revert to extremely long runtimes when
allocsize is provided on the command line.
However, to ensure that we do actually get some coverage of the
zeroing paths, set the allocsize mount option to 64k - this
exercises the EOF zeroing paths, but does not affect the runtime of
the test.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Many of the "count-the-holes" tests (008, 012, etc) do writes that extend the
file and hence allocation patterns are dependent on speculative allocation
beyond EOF behaviour. Hence if we change that behaviour, these tests all fail
because there is a different pattern of holes.
Make the tests independent of EOF preallocation behaviour by first truncating
the file to the size the test is defined to use. This prevents speculative
preallocation from occurring, and hence changes in such behaviour will not
cause the tests to fail.
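Per file, the fix pattern is simply (shown here with the coreutils truncate so the sketch is self-contained; the tests themselves would do this through xfs_io or their existing helpers):

```shell
# Sketch: pin the file to its defined size up front, so speculative
# preallocation beyond EOF can never change the resulting hole pattern.
f=/tmp/holefile.$$
truncate -s 1M "$f"        # in a test: xfs_io -f -c "truncate 1m" $f
sz=$(stat -c %s "$f")
echo "$sz"
rm -f "$f"
```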
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
When compiling "fiemap-tester.c" in my environment, I am
getting complaints at the first reference to "uint64_t"
in <linux/fs.h>. This simple patch resolves that.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
xfs_quota can output different numbers of spaces when it is trying to align
its output, which can cause output mismatches on several systems in test
case 108. Filter all consecutive spaces in the xfs_quota output down to a
single space, making the test case independent of the alignment.
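The filter itself is a one-line sed, shown here on a stand-in line rather than real xfs_quota output:

```shell
# Sketch: collapse every run of spaces to a single space so column
# alignment differences cannot break the golden-output comparison.
line='root       1024    2048'     # stand-in for an xfs_quota report line
echo "$line" | sed 's/  */ /g'
```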
Signed-off-by: Boris Ranto <branto@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Test case 237 checks setfacl output. setfacl can print either a
relative or an absolute path for the filename.
The following patch strips the unnecessary directory part of an
absolute path, so the test case can pass on systems that output
absolute paths.
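The filter can be sketched as follows (the error line and sed expression here are illustrative, not the exact filter in the patch):

```shell
# Sketch: drop leading directories from the filename in setfacl's
# output, so absolute and relative paths compare equal against the
# golden output.
msg='setfacl: /mnt/test/237.file: Operation not permitted'  # stand-in
echo "$msg" | sed 's,: [^:]*/,: ,'
```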
Signed-off-by: Boris Ranto <branto@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Test cases 051 and 067 use getfacl with the -n option. This works well on newer systems, but older acl packages only know its longer form, --numeric.
Signed-off-by: Boris Ranto <branto@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Test 245 only checks whether the rename returned EEXIST but, according to the
rename(2) manpage, ENOTEMPTY is also a valid result, and it is in fact what
Btrfs returns. So just filter the output for ENOTEMPTY so that either EEXIST
or ENOTEMPTY will pass the test. It's not pretty, I know, but I couldn't
really figure out a good way to compare an either/or output. With this fix
Btrfs now passes 245.
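The normalization is just a sed over the error text (the strerror strings below are illustrative; the real filter may match slightly differently):

```shell
# Sketch: rename(2) may legally fail with either EEXIST or ENOTEMPTY
# here, so map the ENOTEMPTY spelling onto the EEXIST one before the
# output is compared.
err='rename: Directory not empty'    # stand-in for what Btrfs produces
echo "$err" | sed 's/Directory not empty/File exists/'
```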
Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
A customer reported a problem:
If a process is using mmap to write to a file on an
ext4 filesystem while another process is using direct
I/O to write to the same file the first thread may
receive a SIGBUS during a page fault.
A SIGBUS occurs if the page fault tries to access a
page that is entirely beyond the end of the file but
in this test case that should not be happening.
Signed-off-by: Lachlan McIlroy <lmcilory@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Test case 223 constantly fails because the variable carrying mkfs
options is never reinitialized.
The test calls _scratch_mkfs_geom repeatedly in a for loop without
cleaning the MKFS_OPTIONS variable. Since _scratch_mkfs_geom only
appends options to the variable, MKFS_OPTIONS looks like this by the
5th iteration:
MKFS_OPTIONS="-bsize=4096-b size=4096 -d su=8192,sw=4-b size=4096 -d
su=16384,sw=4-b size=4096 -d su=32768,sw=4-b size=4096 -d
su=65536,sw=4-b size=4096 -d su=131072,sw=4"
It is also easy to see that _scratch_mkfs_geom does not prepend a
space when it appends to the variable.
The following patch fixes the issue for me and, based on my testing,
does not break any other test case:
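The fix amounts to restoring a saved copy at the top of each loop iteration (a sketch: the geometry appended below imitates what _scratch_mkfs_geom does, and the stripe values are illustrative):

```shell
# Sketch: reset MKFS_OPTIONS from a saved copy on each pass so the
# geometry options from previous iterations do not accumulate.
SAVED_MKFS_OPTIONS="-b size=4096"
for su in 8192 16384 32768; do
    MKFS_OPTIONS="$SAVED_MKFS_OPTIONS"
    MKFS_OPTIONS="$MKFS_OPTIONS -d su=$su,sw=4"  # what the helper appends
    echo "$MKFS_OPTIONS"
done
```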
Signed-off-by: Boris Ranto <branto@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Per POSIX, renames over non-empty directories should fail, but hfsplus used to
allow this (and corrupted the filesystem while doing so).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>