The xfs_db output is different for v5 filesystem metadata, and so
the test fails due to golden image mismatches rather than an actual
test failure. Improve the filter to hide the differences between the
metadata format outputs.
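Such a filter typically deletes or normalises the fields that differ between metadata format versions. A minimal sketch of the idea (function and field names here are illustrative, not the actual filter in the patch):

```shell
#!/bin/bash
# Hypothetical sketch: normalise xfs_db output so v4 and v5 metadata
# produce the same golden output. Field names are examples only.
_filter_xfs_db_version()
{
	sed -e '/^crc = /d' \
	    -e '/^uuid = /d' \
	    -e 's/features2 = .*/features2 = FEATURES/'
}
```

Fields present only on v5 filesystems are dropped outright; fields whose values differ are rewritten to a fixed token.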
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
xfs/299 has failed for me for a long time. In fact, looking at my
logs it has never passed on any of my test machines. IOWs, the
test that was committed was fundamentally broken.
The reason is that it tests project quotas before it tests user or
group quotas, and so creates a bunch of files that are owned by root
or privileged users. It then tries to manipulate them as an
unprivileged user and, unsurprisingly, fails to do so. This causes
the test to throw an error.
The reason it has always failed is the error that is thrown
hardcodes a uid/gid into an error message. This uid/gid is what
causes the golden output mismatch (nobody is 65534 on my machines,
not 99):
*** push past the hard block limit (expect EDQUOT)
[ROOT] 0 0 0 00 [--------] 12 0 0 00 [--------] 0 0 0 00 [--------]
[NAME] =OK= 100 500 00 [--------] 7 4 10 00 [7 days] 0 0 0 00 [--------]
- URK 99: 0 is out of range! [425,500]
+ URK 65534: 0 is out of range! [425,500]
It wasn't until I looked at the xfs/299.full file, while trying to
understand why the error was being thrown and whether it should have
been in the golden output in the first place, that I saw the real
problem: all the user/group quota modifications were failing for
lack of permission to write the files left behind by the project
quota test, and so user and group quotas were not being tested at
all.
So, firstly, $SCRATCH_MNT needs to be world-writable, and secondly,
each test needs to remove the files it created during the test so
they don't impact future test iterations.
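In sketch form, the two fixes amount to something like the following (helper names are hypothetical; the real test inlines this differently):

```shell
#!/bin/bash
# Illustrative sketch of the two fixes: let the unprivileged test
# user write into the scratch mount, and clean up between quota
# passes so leftover root-owned files can't break the next pass.

_setup_quota_dir()
{
	# world-writable so the quota test user can create files
	chmod 777 "$SCRATCH_MNT"
}

_cleanup_quota_files()
{
	# remove files left by the previous quota pass
	rm -f "$SCRATCH_MNT"/*
}
```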
This then exercises the user and group quotas appropriately, and so
the golden output changes completely to reflect that changes made
under user quotas are now accounted to the correct user. Further,
the error message that I originally saw goes away, because
everything is now accounted correctly.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Ever since commit 7e2a19504 ("ls -l reports different file size
depending on platform and user.") xfs/066 has been running stat on
the dump/restore directory instead of the large file that the test
is checking can be dumped and restored correctly. IOWs, it's not
been checking the correct thing for almost 10 years.
This test fails on CRC enabled filesystems because the shortform
directory entry size is different (an extra byte for the filetype
field), and this is where tracking down the failure has led me.
Fix this by using the correct target file, and improve it by dumping
an md5sum of the source and target files to ensure they contain the
same data.
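The data comparison can be sketched as a small helper that checksums both files and fails on a mismatch (the helper name is hypothetical; the test may just diff the two md5sum lines):

```shell
#!/bin/bash
# Sketch: verify a restored file has the same content as the source.
_check_restored_file()
{
	# checksum via stdin so the filename doesn't appear in the sum
	local sum_src sum_dst
	sum_src=$(md5sum < "$1")
	sum_dst=$(md5sum < "$2")
	if [ "$sum_src" != "$sum_dst" ]; then
		echo "restored file $2 differs from source $1"
		return 1
	fi
}
```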
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
This test never passed, so the golden output was never properly
verified as correct. Now that the bug is fixed, fix the golden
output to match the actual test output.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
The first test may start with the file from the previous test, and
that is in an unknown state. Hence always remove the test file
before the first test so that it doesn't have extents inside the
test range as it is supposed to be testing into a hole.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Test 17 of _test_generic_punch uses the filesystem block size to do
a sub-single-block punch. The result of this is files of different
sizes and md5sums when the filesystem block size changes. However,
the only difference in file contents is the length of the file; the
zeroed region is always in the same place. Hence we can use hexdump
rather than md5sum to check that the output remains consistent and
the hole remains in the correct place despite the changing block
sizes.
Fix up all the golden output for all the tests that use this
function, too.
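One way to make the check length-independent is to dump only a fixed window around the expected hole, so the total file size never leaks into the golden output. A sketch of that idea (the helper name and the use of od are illustrative, not the patch's exact mechanism):

```shell
#!/bin/bash
# Sketch: hex-dump a fixed-length window at a known offset, so the
# output is independent of the total file length.
_dump_hole_region()
{
	# $1 = file, $2 = offset in bytes, $3 = length in bytes
	dd if="$1" bs=1 skip="$2" count="$3" 2>/dev/null | od -A x -t x1
}
```

A run of identical bytes collapses in the dump output, so a zeroed region at a fixed offset always produces the same golden output regardless of block size.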
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
xfs/291 tries to fill the filesystem almost full, so if the log size
changes with mkfs defaults then its free space calculations are no
longer valid and it throws lots of ENOSPC errors during a run.
This is not fatal for the test, but it does increase its runtime
and fill the 291.full file with unnecessary errors.
The number of frag files it creates is also too many for a 512 byte
inode filesystem (by about 900), so reduce the number of inodes
initially created so the test works for 512 byte inodes. With 512
byte inodes, the free space histogram looks like this after the frag
phase:
from to extents blocks pct
1 1 10730 10730 100.00
And for 256 byte inodes:
from to extents blocks pct
1 1 12388 12388 100.00
So these changes do not affect the intended operation of the test.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
This test relies on the contents of the sb_features2 field being
known. Make sure to clear all the MKFS_OPTIONS and ensure that we
direct mkfs to create only the simplest of feature sets to test
this functionality.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
The change to the default log sizes in mkfs invalidates the free
space calculations in xfs/104, so the test fails with ENOSPC before
running any of the growfs tests. Make the test use a fixed log size
of 5MB so that the free space calculations remain valid and the
test passes regardless of whether we have a new or old mkfs binary.
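Pinning the log size is a one-line mkfs option; a sketch of what the test's setup might look like (the exact invocation in the patch may differ):

```shell
#!/bin/bash
# mkfs.xfs accepts a fixed log size via "-l size="; pinning it at 5MB
# keeps the test's free space math valid across mkfs versions.
MKFS_OPTIONS="-l size=5m"
# the test would then mkfs the scratch device with something like:
#   _scratch_mkfs_xfs $MKFS_OPTIONS >> $seqres.full 2>&1
```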
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
When running xfstests with an external log, the metadump tests fail
with extra output like:
+filesystem is marked as having an external log; specify logdev on the mount command line.
+xfs_metadump: cannot read superblock for ag 0
Add a _scratch_metadump() function to handle different logdev
configurations automatically for metadump.
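A hedged sketch of what such a helper might look like; the variable names follow common xfstests conventions, but the real implementation may differ:

```shell
#!/bin/bash
# Sketch: run xfs_metadump against the scratch device, naming the
# external log device when one is configured.
_scratch_metadump()
{
	local dumpfile=$1
	shift
	local options=

	# external logs must be named explicitly for metadump
	[ "$USE_EXTERNAL" = yes ] && [ -n "$SCRATCH_LOGDEV" ] && \
		options="-l $SCRATCH_LOGDEV"

	$XFS_METADUMP_PROG $options "$@" "$SCRATCH_DEV" "$dumpfile"
}
```

Tests then call _scratch_metadump instead of invoking xfs_metadump directly, and the external-log case is handled in one place.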
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Otherwise we end up with an ever-growing file for every test that is
run and that makes it hard to isolate failures.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
This has found several issues with recovery on CRC based
filesystems. It is based on a test case for a dir3 assert failure
provided by Michael L Semon.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
CRC enabled filesystems always have 32 bit project IDs and the attr2
format enabled, hence these cannot be turned off. Add new require
rules for the tests that require attr and 16 bit project IDs so
these tests are avoided on CRC enabled filesystems.
Also, add an xfs_db write check so that we can avoid tests that
depend on xfs_db modifying filesystem structures, as they will fail
on CRC enabled filesystems right now. This is just temporary until
full xfs_db write support is available.
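Such a gate could probe xfs_db and _notrun the test when write commands are rejected. A sketch under stated assumptions (the helper name, the probe command, and the error text matched are all hypothetical):

```shell
#!/bin/bash
# Sketch: skip tests that need xfs_db write support when the
# filesystem (e.g. a CRC enabled one) rejects write commands.
_require_xfs_db_write()
{
	# probe with a write invocation in expert mode; match the
	# (assumed) rejection message and skip the test if seen
	if $XFS_DB_PROG -x -c "write" "$SCRATCH_DEV" 2>&1 | \
	    grep -q "not supported"; then
		_notrun "xfs_db write commands not supported on this fs"
	fi
}
```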
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
I got burned on a mishmash system with /usr/sbin/mkfs but
/sbin/mkfs.xfs - or was it the other way around...
Anyway, in these tests, there's no need for the concatenation
to create "mkfs.xfs" - just use MKFS_XFS_PROG.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Introduce xfs/305 to verify that we can turn group/project quotas
off while user quotas are on and fsstress is running at the same
time.
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Introduce xfs/304 to verify that we can turn group/project quotas
off while user quotas are on.
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Refactor xfs/299 to make use of those two CRC related pre-check
routines, and remove the superblock number from the golden output
file as it does not make sense IMO. Also, filter out the
*EXPERIMENTAL* string from the mkfs.xfs output, as that will be
removed once the crc feature becomes stable.
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Introduce a new test to verify that xfs_quota administrator commands
can deal with an invalid XFS mount path properly, without a NULL
pointer dereference.
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
For historical reasons beyond my knowledge, xfstests tries to abuse
the scratch device as the test device for nfs and udf. Because not
all tests have inherited the right usage of the _setup_testdir and
_cleanup_testdir helpers, this leads to lots of unnecessary test
failures.
Remove the special casing, which gets nfs down to a minimal number
of failures.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
xfs/206 displays the output for mkfs.xfs, xfs_growfs and xfs_info.
Change the filtering to hide the new output for the file type
(ftype) feature.
While cleaning up the ftype output, also clean up the projid32bit
output in xfs_growfs and xfs_info.
Signed-off-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
We just want to remove "block device" in _filter_ro_mount(), so add
"mount:" back.
Add one more call of _filter_ro_mount() in xfs/200 to match 200.out.
Signed-off-by: Eryu Guan <eguan@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Multi-stream xfsdump/xfsrestore of more than partialmax wholly-sparse
files segfaults with the following warning:
"partial_reg: Out of records. Extend attrs applied early."
Add a test that dumps and restores partialmax + 1 wholly-sparse files.
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Verify extended attributes are not lost after a multi-stream
xfsdump/xfsrestore of wholly-sparse files. Currently the restore
succeeds, but the extended attributes for such a file are lost.
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>
Test xfs/016 fails to run due to invalid mkfs options. The log size
is reported as too small according to the minimum log size
calculation:
log size 512 blocks too small, minimum size is 853 blocks
Update log_size to the currently specified minimum.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Rich Johnston <rjohnston@sgi.com>