The "dmapi" subtree was developed separate from and sort of wedged
into the rest of the "xfstests" code. As a result, it has a lot of
build infrastructure that's just different from the unified way used
for everything else.
This patch changes all that, making the "dmapi" subtree be a more
normal component of "xfstests" with respect to its build process.
This involves removing all the cruft needed and used by the dmapi
"configure" script, and replacing each "Makefile.am" file with a
proper "Makefile" that includes a simple set of rules that are
compatible with the broader "xfstests" build.
The result is a much cleaner, consistent build. It also deletes
a considerable amount of code.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Make it so "make depend" is a generic target, like "make clean".
Each Makefile has a "depend" target that indicates whether making
dependencies means creating ".dep" or creating ".ltdep" (or, I
suppose, both, though none do that right now). Both files get
created even if there are no CFILES to scan (to ensure the target
is up to date). The "default" target now depends on "depend" (there is
no "ltdepend" any more).
Remove the "depend" and "ltdepend" definitions from the "buildrules"
file; only the actual generated files (".dep" and ".ltdep") remain
as generic targets. The "depend" target is still defined as phony.
Do a shell trick when expanding the value of CFILES, to avoid a
problem that occurs if it is created by "make" by concatenating two
empty strings. In that case CFILES contains a single space, which
was not being treated as empty as desired.
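A minimal sketch of the problem and the fix (the variable value is
illustrative): a value built from two empty strings can end up holding
a single space, which a quoted emptiness test does not treat as empty,
while an unquoted shell expansion collapses the whitespace.

```shell
CFILES=" "
# Quoted test sees the lone space as a non-empty value:
[ -z "$CFILES" ] && echo "quoted: empty" || echo "quoted: non-empty"
# Unquoted expansion collapses the whitespace, so the check behaves
# as intended:
[ -z "$(echo $CFILES)" ] && echo "expanded: empty" || echo "expanded: non-empty"
```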
Make the rule for tool/lib dependencies more generic, to reflect the
general desire that "lib" subdirectories need to be built before
things in the "tool" subdirectories.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Change the top-level Makefile, to make it clearer just what makes
what, and what depends on what:
- Separate the rules for "configure" and "include/builddefs" into
two parts, each of which generate one of the files
- Get rid of the rule for include/config.h, and group it with the
one for include/builddefs (the same command creates both files)
Having done this, we find that having both "include/builddefs" and
"include/config.h" as dependencies for the default target results in
a parallel invocation of "make" spawning two concurrent attempts to
do the configure step--and that doesn't work.
Creating one of those two will result in the other getting created,
so just list one of them as a dependency for the default rule.
A couple of other small fixes:
- Get rid of the "new", "remake" and "check" dependencies for the
default rule, which serv no purpose
- Use the $(Q) convention in a few missed spots
- Stop a DMAPI-only comment from getting echoed on default build
- Delete the "
This updated version pulls in the content of a patch previously
posted separately to fix the problem with parallel builds.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
This patch fixes a few build warnings. I have built the code on
i386, x86_64, and ia64 architectures and each ends up with
complaints of one sort or another. This gets rid of all of them
*except* those reported by files under the "ltp" (Linux Test
Project) sub-tree.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
Tests 127 and 134 leave temp files around when they complete.
Fix (or enable) their cleanup functions to remedy this.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
NFS tries to umount $testdir in the _cleanup_testdir function. Tests
126 and 135 call the function from directory $SCRATCH_MNT, which is
equal to $testdir (at least for NFS). The umount will therefore fail,
causing the test to fail due to the output mismatch.
Test 126 also does a double umount, thanks to the call to _cleanup
before exit and the trap command. So remove the unnecessary call to
_cleanup before exit.
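A minimal sketch of the failure mode ($testdir and the helper name
are illustrative): umount fails with EBUSY when the current working
directory lies inside the mount point, so the helper must step out
of it first.

```shell
testdir=/mnt/test        # hypothetical mount point

_cleanup_testdir() {
    cd /                 # leave $testdir so the unmount is not blocked
    pwd                  # the real helper would now run: umount "$testdir"
}

cd /tmp                  # stand-in for being inside $SCRATCH_MNT
_cleanup_testdir
```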
Signed-off-by: Boris Ranto <branto@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Add random runtime fallocate calls to fsx (vs. the existing
preallocate file at start of run).
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Running the fiemap-tester with a unique random seed each time
may uncover some things missed by always using the default.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Chris Mason pointed out that some filesystems were not doing
the right thing on fiemap, in the face of delalloc extents.
Because test 225 ran with FIEMAP_FLAG_SYNC only, this didn't
get caught. Add a runtime option, and run it both ways.
Note that this changes defaults for fiemap-tester, so that
it no longer calls with FIEMAP_FLAG_SYNC by default, and
a new option -S is added to do so.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
If the fiemap call returns fewer extents than asked for,
the fiemap tester gets confused. If this happens, advance,
and call fiemap again for the next offset.
XFS exposed this because if a file is all-delalloc, it was
only returning 1 mapped extent (this is probably also a buglet).
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Just a hint for those perusing logs that the ensuing shutdown is
intentional...
Feb 16 17:06:17 hostname godown: xfstests-induced forced shutdown of /mnt/scratch
Feb 16 17:06:17 hostname kernel: Filesystem "sdb3": xfs_log_force: error 5 returned.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
The FITRIM ioctl is used on a mounted filesystem to discard (or
"trim") blocks which are not in use by the filesystem. This is
useful for solid-state drives (SSDs) and thinly provisioned storage.
This test helps verify a filesystem's FITRIM implementation to
ensure that it does not corrupt data.
This test creates checksums of all files in the xfstests directory
and runs several processes, each of which clears its working
directory on SCRATCH_MNT, copies everything from xfstests into it,
creates a list of the files in the working directory along with
their checksums, and compares that against the original list of
checksums. Each process runs in a loop, repeating
remove->copy->check, while the fstrim tool runs simultaneously.
Fstrim is just a helper tool which uses the FITRIM ioctl to actually
do the filesystem discard.
I found this very useful because when FITRIM is really buggy (and
thus data-destroying), the 251 test will notice, because the
checksums will most likely change.
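A condensed, illustrative version of the remove->copy->check loop
(the paths, file set, and iteration count are made up; the real test
copies the whole xfstests tree and runs fstrim in parallel):

```shell
src=$(mktemp -d); work=$(mktemp -d)
echo "payload" > "$src/file"
orig=$(cd "$src" && md5sum file)         # baseline checksum

for i in 1 2 3; do
    rm -rf "${work:?}"/*                 # clear the working directory
    cp "$src/file" "$work/file"          # copy the data set back in
    now=$(cd "$work" && md5sum file)     # re-checksum the copy
    [ "$now" = "$orig" ] && echo "pass $i" || echo "FAIL $i"
done
rm -rf "$src" "$work"
```

If a buggy FITRIM discards in-use blocks while this runs, the
checksums diverge and the comparison fails.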
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
When running test 164 against a 4k sector device, the initial file write
of 50K fails with EINVAL, since it isn't a multiple of the device sector
size. I fixed this by bumping the amount written to 52K.
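The arithmetic behind the fix (sector size hard-coded to 4096 here
for illustration): direct I/O sizes must be a multiple of the
device's logical sector size, and 50K is not.

```shell
for kb in 50 52; do
    bytes=$((kb * 1024))
    if [ $((bytes % 4096)) -eq 0 ]; then
        echo "${kb}K: aligned, write succeeds"
    else
        echo "${kb}K: unaligned, write fails with EINVAL"
    fi
done
```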
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
It looks like test 091 is supposed to work on 2.4 kernels, but there's
no way it will. Checking the actual blocksize and pagesize in the
run_fsx routine, and substituting them for BSIZE and PSIZE is error
prone when the two hold the same value. This is also a problem for 4k
sector devices. It's better to pass in what we want (PSIZE or BSIZE)
and then convert that to the command line options that fsx wants in the
run_fsx routine. This gets rid of the bogus test failure in my
environment. Also, the setting of bsize for linux-2.6 was redundant, so
I got rid of it.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
We have hit the following error while running 089:
FSTYP -- ext3
PLATFORM -- Linux/x86_64 localhost 2.6.32-109.el6.x86_64
...
...
completed 50 iterations
completed 50 iterations
completed 50 iterations
-completed 50 iterations
completed 10000 iterations
directory entries:
t_mtab
Ran: 089
Failures: 089
Failed 1 of 1 tests
This is not very easily reproducible; however, one can hit it
eventually when running 089 in a loop. The problem is apparently
that the output might get lost, probably due to some stdio
buffering weirdness.
This commit works around the issue by adding an optional argument to
t_mtab to specify an output file. The t_mtab output is then appended
to a file whose contents are then printed to stdout, as they would
be if no output file were used.
With this commit applied the problem is no longer reproducible.
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
allocsize is an XFS specific mount option, and hence causes the test
to fail on other filesystems. Only set the mount option on xfs
filesystems.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Josef Bacik <josef@redhat.com>
test 042 generates a worst-case fragmented filesystem and uses it to
test xfs_fsr. It uses small 4k files to generate the hole-space-hole
pattern that fragments free space badly. It is much faster to
generate the same pattern by creating a single large file and
punching holes in it. Also, instead of writing large files to
create unfragmented space, just use preallocation so we don't have
to write the data to disk.
These changes reduce the runtime of the test on a single SATA drive
from 106s to 27s.
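A simple sparse-file analogue of why this is faster (the file name
and sizes are made up, and dd's seek stands in for preallocation or
hole punching): seeking past a region is far cheaper than writing
filler data, yet still yields the hole-data layout.

```shell
f=$(mktemp)
# Write a single 4k block at a 4MB offset; everything before it is
# a hole that was never written to disk.
dd if=/dev/zero of="$f" bs=4096 seek=1024 count=1 conv=notrunc 2>/dev/null
echo "apparent size: $(stat -c %s "$f")"
rm -f "$f"
```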
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Test 016 fails with delaylog because it measures log traffic to disk
and delaylog writes almost nothing to the log for the given test. To
make it work, add sync calls to the work loop to cause the log to be
flushed reliably for both delaylog and nodelaylog and hence contain
the same number of log records.
As a result, the log space consumed by the test is not changed by
the delaylog option and the test passes. The test is not
significantly slowed down by the addition of the sync calls (takes
15s to run on a single SATA drive).
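A hedged sketch of the adjusted work loop (the echo stands in for
the test's real file operations): a sync after each pass forces a
log flush regardless of whether delaylog batches the changes.

```shell
for i in 1 2 3; do
    echo "workload pass $i"   # placeholder for the test's file writes
    sync                      # flush the log reliably each iteration
done
```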
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
The problem was reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=626244
The following simple test case triggers the problem:
# mkfs.xfs -f -d agsize=16m,size=50g <dev>
# mount <dev> /mnt
# xfs_io -f -c 'resvsp 0 40G' /mnt/foo
Turn this into a new xfsqa test so that we exercise the problem
code and prevent future regressions.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
We don't have any coverage of the splice functionality provided by
the kernel in xfstests. Add a simple test that uses the sendfile
operation built into xfs_io to copy a file, ensuring we at least
exercise the code path in xfstests.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
When running test 078 against a 4k logical block sized disk, it fails in
xfs_repair. The problem is that xfs_repair is passed the loopback
filename instead of the actual loop device. This means that it opens
the file O_DIRECT, and tries to do 512 byte aligned I/O to a 4k sector
device. The loop device, for better or for worse, will do buffered I/O,
and thus does not suffer from the same problem. So, the attached patch
sets up the loop device and passes that to xfs_repair. This resolves
the issue on my test system.
Comments are more than welcome.
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
This test really wants to test partial file-system block I/Os. Thus, if
the device has a 4K sector size, and the file system has a 4K block
size, there's really no point in running the test. In the attached
patch, I check that the fs block size is larger than the device's
logical block size, which should cover a 4k device block size with a 16k
fs block size.
I verified that the patched test does not run on my 4k sector device
with a 4k file system. I also verified that it continues to run on a
512 byte logical sector device with a 4k file system block size.
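A hypothetical form of the new guard (values are hard-coded here for
illustration; the test reads them from the real device and
filesystem): skip unless the fs block size exceeds the device's
logical sector size, since only then can partial-block I/O occur.

```shell
sector_size=4096     # device logical sector size (illustrative)
fs_block_size=4096   # filesystem block size (illustrative)
if [ "$fs_block_size" -le "$sector_size" ]; then
    echo "not run: fs block size <= device sector size"
else
    echo "running partial-block I/O test"
fi
```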
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
When running xfstests on a 4k logical sector device, I ran into a test
failure in test 198. The errors were all due to trying to do 512 byte
aligned I/O on a 4k logical sector device. The attached patch tries to
auto-detect the proper block size if no alignment is specified. If it
fails for one reason or another, it defaults to 4k alignment. This
seems to work fine in my test rig.
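A hedged sketch of detection with fallback (the device path is
deliberately bogus here so the fallback branch runs; the test would
pass its real target device):

```shell
dev=/dev/does-not-exist
# Query the logical sector size; suppress errors if the query fails.
blksize=$(blockdev --getss "$dev" 2>/dev/null) || blksize=
# If detection failed for any reason, default to 4k alignment.
[ -n "$blksize" ] || blksize=4096
echo "alignment: $blksize"
```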
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
I found that overwriting existing files hides a bug
in ext4 (since fixed). Removing the files before
the test reliably reproduces it.
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
I ran into a failure on an ext4 backport which should have
been caught by this test, but 30s wasn't long enough to
hit it reliably. So run a bit longer; it's not in the
quick group anyway.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>