_______________________
BUILDING THE FSQA SUITE
_______________________

Building Linux:

    - cd into the xfstests directory
    - install prerequisite packages

      For example, for Ubuntu:

        sudo apt-get install xfslibs-dev uuid-dev libtool-bin \
            e2fsprogs automake gcc libuuid1 quota attr libattr1-dev make \
            libacl1-dev libaio-dev xfsprogs libgdbm-dev gawk fio dbench \
            uuid-runtime

      For Fedora, RHEL, or CentOS:

        yum install acl attr automake bc dbench dump e2fsprogs fio \
            gawk gcc indent libtool lvm2 make psmisc quota sed \
            xfsdump xfsprogs \
            libacl-devel libattr-devel libaio-devel libuuid-devel \
            openssl-devel xfsprogs-devel btrfs-progs-devel

        (Older distributions may require xfsprogs-qa-devel as well.)
        (Note that for RHEL and CentOS, you may need the EPEL repo.)

    - run make
    - run make install
    - create fsgqa test user ("sudo useradd fsgqa")
    - create 123456-fsgqa test user ("sudo useradd 123456-fsgqa")
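
      Putting the Linux steps together, a typical first-time setup
      might look like this (a sketch only; it assumes a non-root
      account with sudo):

        cd xfstests
        make
        sudo make install
        sudo useradd fsgqa
        sudo useradd 123456-fsgqa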

Building IRIX:

    - cd into the xfstests directory
    - set the ROOT and TOOLROOT env variables for IRIX appropriately
    - run ./make_irix

______________________
USING THE FSQA SUITE
______________________

Preparing system for tests (IRIX and Linux):

    - compile XFS into your kernel or load XFS modules
    - install administrative tools specific to the filesystem you wish
      to test
    - If you wish to run the udf components of the suite, install
      mkfs_udf and udf_db for IRIX and mkudffs for Linux. Also download
      and build the Philips UDF Verification Software from
      http://www.extra.research.philips.com/udf/, then copy the udf_test
      binary to xfstests/src/. If you wish to disable the UDF
      verification tests, set the environment variable DISABLE_UDF_TEST
      to 1.

    - create one or two partitions to use for testing
        - one TEST partition
            - format as XFS, mount, and optionally populate with
              NON-IMPORTANT stuff
        - one SCRATCH partition (optional)
            - leave empty and expect this partition to be clobbered by
              some tests. If this is not provided, many tests will not
              be run.
              (SCRATCH and TEST must be two DIFFERENT partitions)
      OR
        - for btrfs only: some btrfs test cases need 3 or more
          independent SCRATCH disks, which should be set using
          SCRATCH_DEV_POOL (e.g.
          SCRATCH_DEV_POOL="/dev/sda /dev/sdb /dev/sdc"). When the pool
          is used, the tester should leave SCRATCH_DEV unset; for
          legacy support, the xfstests scripts will set SCRATCH_DEV to
          the first disk in SCRATCH_DEV_POOL.

    - setup your environment
        Quick start:
        - copy local.config.example to local.config and edit as needed
          (an example local.config follows this list)
        Or:
        - setenv TEST_DEV "device containing TEST PARTITION"
        - setenv TEST_DIR "mount point of TEST PARTITION"
        - optionally:
            - setenv SCRATCH_DEV "device containing SCRATCH PARTITION"
              OR (btrfs only) setenv SCRATCH_DEV_POOL "3 or more SCRATCH
              disks for testing btrfs raid concepts"
            - setenv SCRATCH_MNT "mount point for SCRATCH PARTITION"
            - setenv TAPE_DEV "tape device for testing xfsdump"
            - setenv RMT_TAPE_DEV "remote tape device for testing
              xfsdump"
            - setenv RMT_IRIXTAPE_DEV "remote IRIX tape device for
              testing xfsdump"
            - setenv SCRATCH_LOGDEV "device for scratch-fs external log"
            - setenv SCRATCH_RTDEV "device for scratch-fs realtime data"
            - setenv TEST_LOGDEV "device for test-fs external log"
            - setenv TEST_RTDEV "device for test-fs realtime data"
            - if TEST_LOGDEV and/or TEST_RTDEV are set, they will
              always be used.
            - if SCRATCH_LOGDEV and/or SCRATCH_RTDEV are set, setting
              the USE_EXTERNAL environment variable to "yes" will
              enable their use.
            - setenv DIFF_LENGTH "number of diff lines to print from a
              failed test"; the default is 10, set to 0 to print the
              full diff
            - setenv FSTYP "the filesystem you want to test"; the
              filesystem type is normally deduced from the TEST_DEV
              device, but you may want to override it; if unset, the
              default is 'xfs'
            - setenv FSSTRESS_AVOID and/or FSX_AVOID, which contain
              options added to the end of fsstress and fsx invocations,
              respectively, in case you wish to exclude certain
              operational modes from these tests.
            - set TEST_XFS_REPAIR_REBUILD=1 to have
              _check_xfs_filesystem run xfs_repair -n to check the
              filesystem; xfs_repair to rebuild metadata indexes; and
              xfs_repair -n (a third time) to check the results of the
              rebuilding.
            - set TEST_XFS_SCRUB=1 to have _check_xfs_filesystem run
              xfs_scrub -vd to scrub the filesystem metadata online
              before unmounting to run the offline check.
        - or add a case to the switch in common/config assigning these
          variables based on the hostname of your test machine
        - or add these variables to a file called local.config and keep
          that file in your workarea.

    - if testing xfsdump, make sure the tape devices have a tape which
      can be overwritten.
    - make sure $TEST_DEV is a mounted XFS partition
    - make sure that $SCRATCH_DEV or $SCRATCH_DEV_POOL contains nothing
      useful
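
    For reference, a minimal local.config might look like the
    following sketch. The device names and mount points are
    placeholders for your own setup, and the sh-style "export" syntax
    follows local.config.example:

        export TEST_DEV=/dev/sdb1        # TEST partition
        export TEST_DIR=/mnt/test        # TEST mount point
        export SCRATCH_DEV=/dev/sdc1     # SCRATCH partition (clobbered)
        export SCRATCH_MNT=/mnt/scratch  # SCRATCH mount point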

Running tests:

    - cd xfstests
    - By default the test suite will run xfs tests:
        - ./check '*/001' '*/002' '*/003'
        - ./check '*/06?'
    - You can explicitly specify NFS/CIFS/UDF, otherwise the filesystem
      type will be autodetected from $TEST_DEV:
        ./check -nfs [test(s)]
    - Groups of tests may be run with: ./check -g [group(s)]
      See the 'group' file for details on groups
    - for udf tests: ./check -udf [test(s)]
      Running all the udf tests: ./check -udf -g udf
    - for running nfs tests: ./check -nfs [test(s)]
    - for running cifs/smb3 tests: ./check -cifs [test(s)]
    - To randomize test order: ./check -r [test(s)]
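
    As a concrete example, a session might look like this (a sketch;
    'auto' is a commonly present group name, but consult your copy of
    the 'group' file):

        cd xfstests
        ./check '*/001'     # run test 001 for the detected fs type
        ./check -g auto     # run every test in the 'auto' group
        ./check -r -g auto  # the same tests, in randomized order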

The check script tests the return value of each script, and compares
the output against the expected output. If the output is not as
expected, a diff will be output and an .out.bad file will be produced
for the failing test.

Unexpected console messages, crashes and hangs may be considered to be
failures but are not necessarily detected by the QA system.

__________________________
ADDING TO THE FSQA SUITE
__________________________

Creating new test scripts:

    Use the "new" script.
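
    For example, from the top of the xfstests tree (a sketch; it
    assumes the target test subdirectory is passed as the argument,
    after which the script prompts for details such as the optional
    custom name):

        ./new generic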

Test script environment:

    When developing a new test script keep the following things in
    mind. All of the environment variables and shell procedures are
    available to the script once the "common/rc" file has been
    sourced.

    1. The tests are run from an arbitrary directory. If you want to
       do operations on an XFS filesystem (good idea, eh?), then do
       one of the following:

       (a) Create directories and files at will in the directory
           $TEST_DIR ... this is within an XFS filesystem and world
           writeable. You should clean up when your test is done,
           e.g. use a _cleanup shell procedure in the trap ... see 001
           for an example, and the sketch after this list. If you need
           to know, the $TEST_DIR directory is within the filesystem
           on the block device $TEST_DEV.

       (b) mkfs a new XFS filesystem on $SCRATCH_DEV, and mount this
           on $SCRATCH_MNT. Call the _require_scratch function on
           startup if you require use of the scratch partition.
           _require_scratch does some checks on $SCRATCH_DEV &
           $SCRATCH_MNT and makes sure they're unmounted. You should
           clean up when your test is done, and in particular unmount
           $SCRATCH_MNT.

           Tests can make use of $SCRATCH_LOGDEV and $SCRATCH_RTDEV
           for testing external log and realtime volumes - however,
           these tests need to simply "pass" (e.g. cat $seq.out; exit
           - or default to an internal log) in the common case where
           these variables are not set.

    2. You can safely create temporary files that are not part of the
       filesystem tests (e.g. to catch output, prepare lists of things
       to do, etc.) in files named $tmp.<anything>. The standard test
       script framework created by "new" will initialize $tmp and
       clean up on exit.

    3. By default, tests are run as the same uid as the person
       executing the control script "check" that runs the test
       scripts.

    4. Some other useful shell procedures:

       _get_fqdn          - echo the host's fully qualified domain
                            name

       _get_pids_by_name  - one argument is a process name, and
                            return all of the matching pids on
                            standard output

       _within_tolerance  - fancy numerical "close enough is good
                            enough" filter for deterministic output
                            ... see comments in common/filter for an
                            explanation

       _filter_date       - turn ctime(3) format dates into the
                            string DATE for deterministic output

       _cat_passwd,       - dump the content of the password or group
       _cat_group           file (both the local file and the content
                            of the NIS database if it is likely to be
                            present)

    5. General recommendations, usage conventions, etc.:

       - When the content of the password or group file is required,
         get it using the _cat_passwd and _cat_group functions, to
         ensure NIS information is included if NIS is active.

       - When calling getfacl in a test, pass the "-n" argument so
         that numeric rather than symbolic identifiers are used in the
         output.

       - When creating a new test, it is possible to enter a custom
         name for the file. Filenames are in the form NNN-custom-name,
         where NNN is automatically added by the ./new script as a
         unique ID, and "custom-name" is the optional string entered
         into a prompt in the ./new script. It can contain only
         alphanumeric characters and dashes. Note the "NNN-" part is
         added automatically.
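
    As referenced in point 1 above, here is a minimal sketch of how
    these pieces fit together in a test script. The $tmp, _cleanup,
    trap and _require_scratch usage follows the points above; the
    _scratch_mkfs, _scratch_mount and _scratch_unmount helpers are
    assumed from the common code, and the body is purely illustrative:

        #! /bin/bash

        status=1       # failure is the default; the trap preserves it
        _cleanup()
        {
            cd /
            rm -f $tmp.*                # remove our temporary files
        }
        trap "_cleanup; exit \$status" 0 1 2 3 15

        . ./common/rc                   # env variables and helpers

        _require_scratch                # we clobber $SCRATCH_DEV

        _scratch_mkfs > $tmp.mkfs 2>&1  # catch output in a $tmp.* file
        _scratch_mount

        # ... exercise the filesystem under $SCRATCH_MNT here ...

        _scratch_unmount                # clean up: unmount scratch fs
        status=0
        exit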

Verified output:

    Each test script has a name, e.g. 007, and an associated verified
    output, e.g. 007.out.

    It is important that the verified output is deterministic, and
    part of the job of the test script is to filter the output to
    make this so. Examples of the sort of things that need filtering:

        - dates
        - pids
        - hostnames
        - filesystem names
        - timezones
        - variable directory contents
        - imprecise numbers, especially sizes and times
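
    To illustrate, a test might post-process command output like this
    (a sketch; "some_command" is a stand-in, the sed pattern is
    illustrative, and _filter_date is the helper described earlier):

        # map pids and ctime(3) dates to fixed strings for determinism
        some_command 2>&1 | sed -e "s/pid [0-9]*/pid PID/" | _filter_date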

Pass/failure:

    The script "check" may be used to run one or more tests.

    Test number $seq is deemed to "pass" when:
        (a) no "core" file is created,
        (b) the file $seq.notrun is not created,
        (c) the exit status is 0, and
        (d) the output matches the verified output.

    In the "not run" case (b), the $seq.notrun file should contain a
    short one-line summary of why the test was not run. The standard
    output is not checked, so this can be used for a more verbose
    explanation and to provide feedback when the QA test is run
    interactively.
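
    A typical way to produce the $seq.notrun summary is via the
    _notrun helper, assumed here from the common code (writing the
    file directly also works):

        _notrun "this test requires a scratch device"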

    To force a non-zero exit status use:

        status=1
        exit

    Note that:

        exit 1

    won't have the desired effect because of the way the exit trap
    works.

The recent pass/fail history is maintained in the file "check.log".
The elapsed time for the most recent pass for each test is kept in
"check.time".

The compare-failures script in tools/ may be used to compare failures
across multiple runs, given files containing stdout from those runs.
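
    For example (a sketch; the file names are placeholders, each file
    holding the saved stdout of one run):

        tools/compare-failures run1.out run2.out run3.out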

__________________
SUBMITTING PATCHES
__________________

Send patches to the fstests mailing list at fstests@vger.kernel.org.