btrfs-progs v4.10 introduced a behaviour change: starting a balance
operation without any filters now delays for 10 seconds and prints a
warning to stdout saying that a full balance is about to be done and
that it can be a slow operation. A new flag, '--full-balance', was
added in that release to skip the 10 second delay and the warning
message.
Our existing helper _run_btrfs_balance_start() uses that new flag if we
are running a btrfs-progs version that has it, in order to avoid the 10
second wait.
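A minimal sketch of such a helper, assuming the flag can be detected from the balance command's help output (the detection logic and function name here are illustrative, not the actual fstests implementation):

```shell
# Illustrative sketch: decide whether to pass --full-balance based on
# the help text of the installed btrfs-progs (not the real helper).
balance_full_flag()
{
	# $1: output of "$BTRFS_UTIL_PROG balance start --help"
	if echo "$1" | grep -q -e "--full-balance"; then
		echo "--full-balance"
	fi
}

# Hypothetical usage:
#   flag=$(balance_full_flag "$($BTRFS_UTIL_PROG balance start --help 2>&1)")
#   $BTRFS_UTIL_PROG balance start $flag "$SCRATCH_MNT"
```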
Make all existing btrfs tests that trigger balance operations use the
_run_btrfs_balance_start() helper, so that we avoid wasting time and
speed up some of the tests. In particular, test btrfs/014 is now about
10x faster, and tests btrfs/060 to btrfs/064 run 3 to 5 times faster
(depending on the random fsstress load).
Besides speeding up many tests that do balance operations, this also
fixes functional problems:
1) Since btrfs-progs v4.10, test case btrfs/014 has been broken,
because its purpose is to run balance and snapshot creation in
parallel, and that was no longer happening: all snapshots were being
created during the 10 second delay of the first balance operation,
so balance and snapshot creation were serialized instead of running
in parallel.
Fixing the test to avoid the 10 second delay immediately exposes a
regression that went into kernel 5.7-rc1 and is fixed by the
following commit:
aec7db3b13a0 ("btrfs: fix setting last_trans for reloc roots")
2) Test cases btrfs/060 to btrfs/064 now spend much more time running
fsstress, balance and other operations in parallel; there are no
longer 10 second intervals where balance is not running concurrently
with those other operations, making the tests a lot more useful
again.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Some btrfs test cases use a btrfs module reload to unregister devices
from the btrfs kernel module. The problem with the module reload
approach is that if the test system uses btrfs as its root filesystem,
these test cases cannot run.
The patches in [1] introduced the btrfs forget feature, which can
unregister devices without a module reload.
[1]
btrfs-progs: device scan: add new option to forget one or all scanned devices
btrfs: introduce new ioctl to unregister a btrfs device
This patch makes the relevant changes in fstests to use this new
feature when it is available.
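The runtime selection could be sketched like this (the helper name and fallback structure are illustrative and may differ from the actual fstests helper):

```shell
# Illustrative sketch: try the new forget feature first and fall back
# to a module reload only when forget is unavailable.
_btrfs_forget_or_reload()
{
	# "btrfs device scan --forget" unregisters stale devices via the
	# new ioctl; it fails on btrfs-progs versions without the feature
	if "$BTRFS_UTIL_PROG" device scan --forget > /dev/null 2>&1; then
		return 0
	fi
	# fallback: impossible when btrfs backs the root filesystem
	modprobe -r btrfs && modprobe btrfs
}
```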
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
btrfs/125, btrfs/148, btrfs/157, and btrfs/158 test raid56 behavior.
They shouldn't run if the kernel doesn't support raid56.
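One way to gate on that, assuming the kernel exposes its feature list under /sys/fs/btrfs/features (the helper name and sysfs path are assumptions for illustration):

```shell
# Illustrative sketch: skip ("notrun") the test when the kernel does
# not advertise raid56 support; the sysfs path can be overridden for
# testing. _notrun is the fstests skip mechanism.
_require_btrfs_raid56()
{
	local feat="${1:-/sys/fs/btrfs/features/raid56}"

	if [ ! -e "$feat" ]; then
		_notrun "kernel btrfs has no raid56 support"
	fi
}
```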
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
We were sorting numerical values with the 'sort' tool without telling
it that we were sorting numbers, which gave us unexpected ordering. So
just pass the '-n' option to 'sort'.
Example:
$ echo -e "11\n9\n20" | sort
11
20
9
$ echo -e "11\n9\n20" | sort -n
9
11
20
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Add helper functions to require that we can reload a given module, and
a helper to actually do the reload. Refactor the existing users to use
the generic helpers.
We need to hoist the entire behavior of the old btrfs module helper
because we need to confirm, before starting the test, that we can
actually remove the module.
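A hedged sketch of what such generic helpers might look like (function names are illustrative, not necessarily the ones added by the patch):

```shell
# Illustrative sketch: confirm up front that the module can actually be
# removed (a module backing the root fs has a nonzero refcount and
# cannot be), then provide a generic reload helper. _notrun is the
# fstests skip mechanism.
_require_loadable_module()
{
	modprobe -r "$1" > /dev/null 2>&1 || \
		_notrun "module $1 cannot be unloaded"
	modprobe "$1" || _notrun "module $1 cannot be loaded"
}

_reload_module()
{
	modprobe -r "$1"
	modprobe "$1"
}
```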
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eryu Guan <eguan@redhat.com>
Signed-off-by: Eryu Guan <eguan@redhat.com>
The tests mount the second device in the device pool but never
unmount it, causing the next test to fail.
Example:
$ cat local.config
export TEST_DEV=/dev/sdb
export TEST_DIR=/home/fdmanana/btrfs-tests/dev
export SCRATCH_MNT="/home/fdmanana/btrfs-tests/scratch_1"
export SCRATCH_DEV_POOL="/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg"
export FSTYP=btrfs
$ ./check btrfs/125 btrfs/126
FSTYP -- btrfs
PLATFORM -- Linux/x86_64 debian3 4.8.0-rc8-btrfs-next-35+
MKFS_OPTIONS -- /dev/sdc
MOUNT_OPTIONS -- /dev/sdc /home/fdmanana/btrfs-tests/scratch_1
btrfs/125 23s ... 22s
btrfs/126 1s ... - output mismatch (see /home/fdmanana/git/hub/xfstests/results//btrfs/126.out.bad)
--- tests/btrfs/126.out 2016-11-24 06:11:42.048372385 +0000
+++ /home/fdmanana/git/hub/xfstests/results//btrfs/126.out.bad 2016-11-24 06:16:35.987988895 +0000
@@ -1,2 +1,5 @@
QA output created by 126
-pwrite: Disk quota exceeded
+ERROR: /dev/sdc is mounted
+mount: /dev/sdc is already mounted or /home/fdmanana/btrfs-tests/scratch_1 busy
+ /dev/sdc is already mounted on /home/fdmanana/btrfs-tests/scratch_1
+/home/fdmanana/btrfs-tests/scratch_1/test_file: Disk quota exceeded
...
(Run 'diff -u tests/btrfs/126.out /home/fdmanana/git/hub/xfstests/results//btrfs/126.out.bad' to see the entire diff)
Ran: btrfs/125 btrfs/126
Failures: btrfs/126
Failed 1 of 2 tests
So just make sure those tests unmount the device before they finish.
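The fix amounts to unmounting the extra device on the tests' exit path, for example (the function and variable names are illustrative):

```shell
# Illustrative sketch: always unmount the extra mount point before the
# test exits, ignoring errors if it was never mounted.
cleanup_extra_mount()
{
	# $1: the extra mount point used by the test
	$UMOUNT_PROG "$1" > /dev/null 2>&1
	return 0
}
```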
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Eryu Guan <eguan@redhat.com>
Signed-off-by: Eryu Guan <eguan@redhat.com>
The test does the following:
1) Initialize a RAID5 filesystem with some data.
2) Re-mount the RAID5 degraded, with _dev3_ missing, and write data.
3) Save md5sum checkpoint1.
4) Re-mount the healthy RAID5.
5) Let balance fix the degraded blocks.
6) Save md5sum checkpoint2.
7) Re-mount the RAID5 degraded, now with _dev1_ missing.
8) Save md5sum checkpoint3.
9) Verify that all three md5sums match.
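The md5sum checkpoint comparison at the heart of the test could be sketched as follows (the function name and layout are illustrative, not the test's actual implementation):

```shell
# Illustrative sketch: digest all file data under a mount point in a
# stable order, so digests taken after different mounts are comparable.
checkpoint()
{
	find "$1" -type f -print0 | sort -z | xargs -0 cat | \
		md5sum | cut -d ' ' -f 1
}

# Hypothetical usage:
#   cp1=$(checkpoint "$SCRATCH_MNT")   # degraded, dev3 missing
#   cp2=$(checkpoint "$SCRATCH_MNT")   # healthy, after balance
#   cp3=$(checkpoint "$SCRATCH_MNT")   # degraded, dev1 missing
#   [ "$cp1" = "$cp2" ] && [ "$cp2" = "$cp3" ] || echo "mismatch"
```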
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Eryu Guan <eguan@redhat.com>
Signed-off-by: Eryu Guan <eguan@redhat.com>