Fsstress exec behaviour is not completely deterministic in
low-resources mode due to ENOMEM, ENOSPC, etc. In some places we
call stat(2). This information may be helpful for future
investigation purposes. Let's dump stat info where possible.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
This verifies the various RAID features in btrfs as well as the
device replacement functionality.
Signed-off-by: Anand Jain <Anand.Jain@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Create snapshots in various ways, modify the data around the block and
file boundaries and verify the data integrity.
Signed-off-by: Anand Jain <Anand.Jain@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
SCRATCH_DEV takes a single disk as the scratch place for testing. The new
SCRATCH_DEV_POOL can be used to specify multiple disks for the scratch
btrfs filesystem.
Using SCRATCH_DEV and/or SCRATCH_DEV_POOL follows this logic:
btrfs FS OR any FS
SCRATCH_DEV_POOL is unset and SCRATCH_DEV is set
. test-case with _require_scratch_dev_pool will not run
. test-case without _require_scratch_dev_pool will run
SCRATCH_DEV_POOL is set and SCRATCH_DEV is unset
. test-case with _require_scratch_dev_pool
- runs only if FSTYP=btrfs
. test-case without _require_scratch_dev_pool will run using the first
  dev in the SCRATCH_DEV_POOL as the SCRATCH_DEV
- if FSTYP=btrfs it includes SCRATCH_DEV_POOL disks to the FS
- if FSTYP=non-btrfs SCRATCH_DEV_POOL is ignored
SCRATCH_DEV_POOL is set and SCRATCH_DEV is set
. reports error in the config
SCRATCH_DEV_POOL is unset and SCRATCH_DEV is unset
. no change
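A hypothetical local.config fragment under the new scheme (the device paths and mount points here are examples only):

```shell
# Example config: SCRATCH_DEV stays unset when SCRATCH_DEV_POOL is
# set, since setting both is reported as a config error.
export TEST_DEV=/dev/sdb1
export TEST_DIR=/mnt/test
export SCRATCH_DEV_POOL="/dev/sdc /dev/sdd /dev/sde"
export SCRATCH_MNT=/mnt/scratch
```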
Signed-off-by: Anand Jain <Anand.Jain@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
The snapshot data integrity test case needs a filesystem with random data.
Signed-off-by: Anand Jain <Anand.Jain@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
A clean checkout of xfstests followed by a build resulted in a long list
of untracked files. The current .gitignore ignores most binaries, but
the "dmapi" subdir was missed as were some binaries from the "src"
subdir.
Also ".libs" and ".ltdep" appear under a "dmapi" subdir, not just under
the top-level "libs" directory, so ignore those regardless of the
directory they are in.
Signed-off-by: Bill Kendall <wkendall@sgi.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Add the release script used in the other XFS user space packages.
The version is set to 1.1.0, to differentiate it from the 1.0.0
version that was recorded in the VERSION file.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Before punching a hole in a file, fsx calls
TRIM_OFF_LEN() in order to make sure the offset and size
used are in a reasonable range. But currently the range
it's limited to is maxfilelen, which allows the offset
(and therefore offset + len) to be beyond EOF.
Later, do_punch_hole() ignores any request that starts beyond
EOF, so we might as well limit requests to the file size.
It appears that a hole punch request that starts within a
file but whose length extends beyond it is treated simply
as a hole punch up to EOF. So there's no harm in limiting
the end of a hole punch request to the file size either.
Therefore, use TRIM_OFF_LEN() to put both the offset
and length of a request within the file size for hole
punch requests.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
A recent commit added a TRIM_OFF_LEN() macro in "ltp/fsx.c":
5843147e xfstests: fsx fallocate support is b0rked
A later commit fixed a problem with that macro:
c47d7a51 xfstests: fix modulo-by-zero error in fsx
There is an extra flag parameter in that macro that I didn't like
in either version. When looking at it the second time around I
concluded that there was no need for the flag after all.
Going back to the first commit, the code that TRIM_OFF_LEN()
replaced had one of two forms:
- For OP_READ and OP_MAP_READ:

	if (file_size)
		offset %= file_size;
	else
		offset = 0;
	if (offset + size > file_size)
		size = file_size - offset;

- For all other cases (except OP_TRUNCATE):

	offset %= maxfilelen;
	if (offset + size > maxfilelen)
		size = maxfilelen - offset;
There's no harm in ensuring maxfilelen is non-zero (and doing so
is safer than what's done above). So both of the above can be
generalized this way:
	if (SIZE_LIMIT)
		offset %= SIZE_LIMIT;
	else
		offset = 0;
	if (offset + size > SIZE_LIMIT)
		size = SIZE_LIMIT - offset;
In other words, there is no need for the extra flag in the macro.
The following patch just does away with it. It uses the value of
the "size" parameter directly in avoiding a divide-by-zero, and in
the process avoids referencing the global "file_size" within the
macro expansion.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
xfs_io uses the filesystem block size as the default write buffer
size. Test 165 does not filter the ops counts out of the golden output,
and hence causes failures because the ops count doesn't match for a
given sized write. Fix this by changing the filter to the generic
xfs_io no-numbers filter.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
The tests in 091 are entirely generic and pass e.g. on ext4 and jfs.
btrfs fails it, but that looks like a btrfs-specific issue to me.
Also use _supported_os properly instead of erroring out manually on
IRIX.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
This effectively reverts
xfstests: add mapped write fsx operations to 091
and adds a new test case for it. It tests something slightly
different, and regressions in existing tests due to new features
are pretty nasty in a test suite.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
- filter out xfs_alloctype_t, this was an internal enum that got removed
- filter out xfs_bmbt_rec_32_t, this is a variant of the xfs_bmbt_rec_t
that had almost no users and was removed
- filter out xfs_dinode_core_t, the separate dinode core is gone, and just
checking the size of the full dinode is enough
- accept xfs_bmbt_rec_t as the new canonical name for xfs_bmbt_rec_64_t,
and replace the old name with the new one in the output stream.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
The xfs_bmap output in the golden image is filesystem block size
dependent. Make all writes 64k to ensure that the allocation/hole
pattern is consistent across all supported filesystem block sizes.
Also, use the SCRATCH_DEV instead of the TEST_DEV so that we test
according to MKFS_OPTIONS rather than test on whatever setup the
TEST_DEV was created with.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Use the scratch device for test 225 so that both custom mkfs and
mount options impact the test (e.g. filesystem block size). This
exposes test failures when using 512 byte block sizes, which are
currently not tested unless the test device is specifically created
with a 512 byte block size.
Also clean up the file names to include the test number, and don't
remove the test files after the test has finished so that it leaves
behind a corpse that can be dissected when the test fails.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Make test 259 a bit more readable by using the new _math() function.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Rather than testing for an exact timestamp, which could vary
due to rounding, just check that it is not positive,
which is the failure case we're looking for.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
This test checks the project quota values reported by the quota "df"
and "report" subcommands to ensure they match what they should be.
There was a bug (fixed by xfsprogs commit 7cb2d41b) where the values
reported were double what they should have been.
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
There is code in libxcmd that sets up a table of mount points and
directories that may be subject to quota enforcement. If any entry
in the mount table (/proc/self/mounts) is inaccessible or has any
other problems, libxcmd exits.
We have encountered mtab entries that appear to be artifacts from
automount that, when parsed by getmntent(), return paths in the
mnt_fsname field that do not exist. Such entries tend to have the
text " (deleted)" appended to a legitimate pathname (although the
space character is expanded to \040, as documented in getmntent(3)).
The xfs_quota command supports the ability to specify an alternate
mount table file, so this test makes use of that feature to exercise
the problem. The test simply uses xfs_quota to print the current
set of paths, providing an alternate mount table file. First it
does so with a copy of the current mount table (which is assumed
OK); then an extra bogus entry (very much like what has been seen
in the wild) is appended to the mount table, and the xfs_quota
command is run again.
It does this with no mount options, as well as with user, group, and
project quota options enabled. (Given the current state of the code
however, only one of these is required.)
Signed-off-by: Alex Elder <aelder@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
This test is supposed to validate that file systems handle the FITRIM
arguments correctly. It checks that fstrim returns EINVAL in case
the start of the range is beyond the end of the file system, and also
that fstrim works without an error if the length of the range is
bigger than the file system (it should be truncated to the file system
length automatically within the FITRIM implementation).
This test should also catch a common problem with overflow of start+len.
Some file systems (ext4, xfs) had overflow problems in the past, so there
is a specific test for it (for ext4 and xfs) as well as a generic test for
other file systems; it would be nice if other filesystems added their own
specific checks if this problem applies to them as well.
[Added call to _require_math. -Alex]
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Sometimes bash $(()) math might not be enough due to its
limitations (big numbers), so add a helper using the 'bc' program. For
now the results are only whole numbers (as in bash) since this is
all I need for now.
This commit also adds _require_math() helper which should be called by
every test which uses _math() since it requires "bc" to be installed on
the system.
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Alex Elder <aelder@sgi.com>
Move the assignment of testfile after the sourcing of the common.* files to
make sure TEST_DIR is already defined - without this we end up creating
the file on the root filesystem, which may not support large enough files.
Also add a sync after the mkfs.xfs invocation, as losetup -d might fail
the loop device deletion with -EBUSY otherwise.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>