xfs: test larger dump/restore to/from file

This test creates a large-ish directory structure using
fsstress, and does a dump/restore to make sure we dump
all the files.

Without the fix for the regression caused by:
c7cb51d xfs: fix error handling at xfs_inumbers

we will see failures like:

    -xfsrestore: 486 directories and 1590 entries processed
    +xfsrestore: 30 directories and 227 entries processed

as it fails to process all inodes.

I think the existing tests create a much smaller set of files,
and so don't trip the bug.

I don't do a file-by-file comparison here, because for some
reason the diff output gets garbled; this test only checks
that we've dumped & restored the correct number of files.
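The count check described above can be sketched in plain shell. This is
an illustrative sketch only, not the actual test code: the summary string
mirrors the failure output quoted earlier, and the sed patterns are
assumptions about how one might parse it.

```shell
# Hypothetical sketch: extract the directory and entry counts from an
# xfsrestore summary line so dump and restore totals can be compared.
# The summary text mirrors the failure output quoted above.
summary="xfsrestore: 486 directories and 1590 entries processed"
dirs=$(echo "$summary" | sed -n 's/.*: \([0-9]*\) directories.*/\1/p')
entries=$(echo "$summary" | sed -n 's/.* and \([0-9]*\) entries.*/\1/p')
echo "$dirs directories, $entries entries"
```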

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
commit 481c28f52f
parent cf1438248c
Author:    Eric Sandeen
Date:      2014-10-14 22:59:39 +11:00
Committed: Dave Chinner

4 changed files with 105 additions and 2 deletions
@@ -298,15 +298,16 @@ _stable_fs()
 # files,dirs,links,symlinks
 #
 # Pinched from test 013.
+# Takes one argument, the number of ops to perform
 #
-_create_dumpdir_stress()
+_create_dumpdir_stress_num()
 {
     echo "Creating directory system to dump using fsstress."
+    _count=$1
     _wipe_fs
     _param="-f link=10 -f creat=10 -f mkdir=10 -f truncate=5 -f symlink=10"
-    _count=240
     rm -rf $dump_dir
     if ! mkdir $dump_dir; then
         echo "    failed to mkdir $dump_dir"
@@ -331,6 +332,10 @@ _create_dumpdir_stress()
     _stable_fs
 }
 
+_create_dumpdir_stress() {
+    _create_dumpdir_stress_num 240
+}
+
 _mk_fillconfig1()
 {
     cat <<End-of-File >$tmp.config
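The refactoring in the hunks above follows a simple wrapper pattern: the
old fixed-count function becomes a thin wrapper over a parameterized one,
so existing callers keep their 240-op default while a new test can ask
for a larger tree. A standalone sketch of that pattern (function names
mirror the diff; the echo body is a stub standing in for the real
fsstress invocation):

```shell
# Sketch of the wrapper pattern from the diff above. The echo body is
# a stub for illustration; the real function runs fsstress.
_create_dumpdir_stress_num()
{
    _count=$1
    echo "fsstress ops: $_count"
}

_create_dumpdir_stress() {
    _create_dumpdir_stress_num 240
}

default_run=$(_create_dumpdir_stress)      # existing callers, unchanged behavior
big_run=$(_create_dumpdir_stress_num 1000) # a new test can request a larger tree
echo "$default_run"
echo "$big_run"
```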