apfstests/tests/generic/060.out
Anand Jain 3c48a2ca20 fstests: fix _test_generic_punch() to fit 64k extent
14 test cases use _test_generic_punch(), and they work well as long
as the ext4/xfs block size or btrfs sector size does not exceed 4K.

On a system with a 64K page size, the block size or sector size can be
up to 64K, so 13 of the 14 test cases fail: the test file size (20K),
and thus the extent boundary offsets, are not big enough to fit
extents larger than 4K.
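The arithmetic behind the failure can be sketched as follows (an illustrative check, not fstests code; the variable names are hypothetical):

```shell
# With a 64K block size, a 20K test file rounds up to a single 64K
# block, so hole/extent boundaries placed at 4K-multiple offsets
# inside the file cannot exist as separate extents.
blocksize=$((64 * 1024))
filesize=$((20 * 1024))
blocks=$(( (filesize + blocksize - 1) / blocksize ))
echo "blocks=$blocks"   # prints "blocks=1": the whole file is one block
```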

Commit 2f194e4e82 ("generic/009: don't run for btrfs if
PAGE_SIZE > 4096") tried to address this by calling _notrun in
generic/009.

_test_generic_punch() also uses multiple=4 to work around a similar
problem, but that is limited to the fcollapse subcommand.

To run these test cases successfully on systems with a 64K page size,
this patch raises the default from multiple=1 to multiple=16. That
increases the test file size from 20K to 320K, which is large enough
to encapsulate the maximum extent size of 64K, and lets us drop the
multiple=4 special case that applied only to the fcollapse subcommand.
There appears to be no harm in increasing the file size and offsets
for all subcommands instead of just fcollapse.
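The effect of the scaling factor can be sketched as follows (a simplified illustration with hypothetical variable names, not the actual _test_generic_punch() code):

```shell
# Sketch of how a single scaling factor enlarges both the test file
# and the punch offsets, so every boundary still lands on an extent
# boundary once the block size grows to 64K.
multiple=16                       # patch raises the default from 1 to 16
base_filesize=$((20 * 1024))      # 20K, the pre-patch test file size
base_offset=$((4 * 1024))         # a 4K-aligned punch offset

filesize=$((base_filesize * multiple))   # 327680 (320K) with multiple=16
offset=$((base_offset * multiple))       # 65536 (64K) with multiple=16

echo "filesize=$filesize offset=$offset"
```

Because the file size and every offset scale by the same factor, the relative layout of holes and extents is preserved; only the granularity grows to match the larger block size.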

This change has been tested on ext4, xfs, and btrfs on systems with
4K and 64K page sizes.

With this patch, these 14 test cases run fine on systems with a 64K
page size as well as a 4K page size. However, we may hit the same
limitation again when validating filesystems with page sizes greater
than 64K; this patch does not address that case.

Signed-off-by: Anand Jain <anand.jain@oracle.com>
Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
2018-09-23 22:26:56 +08:00


QA output created by 060
1. into a hole
72b5e7556a604b06e790401ecc7b5b2d
2. into allocated space
0: [0..127]: extent
1: [128..383]: hole
2: [384..895]: extent
85150f56d1f598daa2776771bbfb8347
3. into unwritten space
0: [0..127]: extent
1: [128..383]: hole
2: [384..895]: extent
72b5e7556a604b06e790401ecc7b5b2d
4. hole -> data
0: [0..511]: hole
1: [512..767]: extent
2: [768..895]: hole
3bbe716019739da9679d10dafbaf0cdf
5. hole -> unwritten
0: [0..511]: hole
1: [512..767]: extent
2: [768..895]: hole
72b5e7556a604b06e790401ecc7b5b2d
6. data -> hole
0: [0..127]: extent
1: [128..383]: hole
2: [384..511]: extent
3: [512..895]: hole
097cbf706ff92b327228097f81e71f9e
7. data -> unwritten
0: [0..127]: extent
1: [128..383]: hole
2: [384..767]: extent
3: [768..895]: hole
097cbf706ff92b327228097f81e71f9e
8. unwritten -> hole
0: [0..127]: extent
1: [128..383]: hole
2: [384..511]: extent
3: [512..895]: hole
72b5e7556a604b06e790401ecc7b5b2d
9. unwritten -> data
0: [0..127]: extent
1: [128..383]: hole
2: [384..767]: extent
3: [768..895]: hole
3bbe716019739da9679d10dafbaf0cdf
10. hole -> data -> hole
0: [0..639]: hole
1: [640..767]: extent
2: [768..1023]: hole
25d5a6b0e585c6786bad8e89772bec43
11. data -> hole -> data
0: [0..127]: extent
1: [128..511]: hole
2: [512..639]: extent
3: [640..767]: hole
4: [768..1023]: extent
59318afefe51e77755ae7d3ef45cd067
12. unwritten -> data -> unwritten
0: [0..127]: extent
1: [128..511]: hole
2: [512..1023]: extent
25d5a6b0e585c6786bad8e89772bec43
13. data -> unwritten -> data
0: [0..127]: extent
1: [128..511]: hole
2: [512..1023]: extent
14f9fdcf7f1920275e6de2b342441a24
14. data -> hole @ EOF
0: [0..383]: extent
1: [384..639]: hole
2: [640..895]: extent
222a22b39253359b4afd167b9f150530
15. data -> hole @ 0
0: [0..255]: hole
1: [256..895]: extent
3f701b5bae2bec1d49dd68b17fa334e5