btrfs/098: work on non-4k block sized filesystems

This commit makes use of the new _filter_xfs_io_blocks_modified filtering
function to print information in terms of file blocks rather than file
offset.
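For reference, the block-based reporting can be sketched as follows. This is a minimal illustration assuming a fixed 4K block size; the real _filter_xfs_io_blocks_modified helper lives in common/filter and the test derives the block size from the mounted filesystem instead:

```shell
# Hedged sketch: convert a byte range reported by xfs_io into the block
# range printed by the filter. BLOCK_SIZE=4096 is an assumption here.
BLOCK_SIZE=4096
offset=$((200 * BLOCK_SIZE))          # write starts at block 200
length=$((25 * BLOCK_SIZE))           # write spans 25 blocks
start_block=$((offset / BLOCK_SIZE))
end_block=$(((offset + length) / BLOCK_SIZE - 1))
echo "Blocks modified: [$start_block - $end_block]"
```

With these numbers the sketch prints the same line that appears in the updated golden output, `Blocks modified: [200 - 224]`, independent of the byte offsets involved.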

Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
commit 2099e00681 (parent b7318fa070)
Author:    Chandan Rajendra
Date:      2015-12-21 18:01:46 +11:00
Committer: Dave Chinner

2 changed files with 59 additions and 37 deletions
tests/btrfs/098: +38 -31
@@ -58,43 +58,50 @@ _scratch_mkfs >>$seqres.full 2>&1
 _init_flakey
 _mount_flakey
-# Create our test file with a single 100K extent starting at file offset 800K.
-# We fsync the file here to make the fsync log tree gets a single csum item that
-# covers the whole 100K extent, which causes the second fsync, done after the
-# cloning operation below, to not leave in the log tree two csum items covering
-# two sub-ranges ([0, 20K[ and [20K, 100K[) of our extent.
-$XFS_IO_PROG -f -c "pwrite -S 0xaa 800K 100K" \
-	-c "fsync" \
-	$SCRATCH_MNT/foo | _filter_xfs_io
+BLOCK_SIZE=$(get_block_size $SCRATCH_MNT)
-# Now clone part of our extent into file offset 400K. This adds a file extent
-# item to our inode's metadata that points to the 100K extent we created before,
-# using a data offset of 20K and a data length of 20K, so that it refers to
-# the sub-range [20K, 40K[ of our original extent.
-$CLONER_PROG -s $((800 * 1024 + 20 * 1024)) -d $((400 * 1024)) \
-	-l $((20 * 1024)) $SCRATCH_MNT/foo $SCRATCH_MNT/foo
+# Create our test file with a single 25 block extent starting at the file
+# offset mapped by the 200th block. We fsync the file here to make the fsync
+# log tree get a single csum item that covers the whole 25 block extent, which
+# causes the second fsync, done after the cloning operation below, to not leave
+# in the log tree two csum items covering two block sub-ranges ([0, 5[ and
+# [5, 25[) of our extent.
+$XFS_IO_PROG -f -c "pwrite -S 0xaa $((200 * $BLOCK_SIZE)) $((25 * $BLOCK_SIZE))" \
+	-c "fsync" \
+	$SCRATCH_MNT/foo | _filter_xfs_io_blocks_modified
+# Now clone part of our extent into the file offset mapped by the 100th block.
+# This adds a file extent item to our inode's metadata that points to the 25
+# block extent we created before, using a data offset of 5 blocks and a data
+# length of 5 blocks, so that it refers to the block sub-range [5, 10[ of our
+# original extent.
+$CLONER_PROG -s $(((200 * $BLOCK_SIZE) + (5 * $BLOCK_SIZE))) \
+	-d $((100 * $BLOCK_SIZE)) -l $((5 * $BLOCK_SIZE)) \
+	$SCRATCH_MNT/foo $SCRATCH_MNT/foo
 # Now fsync our file to make sure the extent cloning is durably persisted. This
 # fsync will not add a second csum item to the log tree containing the checksums
-# for the blocks in the sub-range [20K, 40K[ of our extent, because there was
+# for the blocks in the block sub-range [5, 10[ of our extent, because there was
 # already a csum item in the log tree covering the whole extent, added by the
 # first fsync we did before.
 $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
-echo "File digest before power failure:"
-md5sum $SCRATCH_MNT/foo | _filter_scratch
+echo "File contents before power failure:"
+od -t x1 $SCRATCH_MNT/foo | _filter_od
 # The fsync log replay first processes the file extent item corresponding to the
-# file offset 400K (the one which refers to the [20K, 40K[ sub-range of our 100K
-# extent) and then processes the file extent item for file offset 800K. It used
-# to happen that when processing the latter, it erroneously left in the csum tree
-# 2 csum items that overlapped each other, 1 for the sub-range [20K, 40K[ and 1
-# for the whole range of our extent. This introduced a problem where subsequent
-# lookups for the checksums of blocks within the range [40K, 100K[ of our extent
-# would not find anything because lookups in the csum tree ended up looking only
-# at the smaller csum item, the one covering the sub-range [20K, 40K[. This made
-# read requests assume an expected checksum with a value of 0 for those blocks,
-# which caused checksum verification failure when the read operations finished.
+# file offset mapped by the 100th block (the one which refers to the [5, 10[
+# block sub-range of our 25 block extent) and then processes the file extent
+# item for the file offset mapped by the 200th block. It used to happen that
+# when processing the latter, it erroneously left in the csum tree 2 csum items
+# that overlapped each other, 1 for the block sub-range [5, 10[ and 1 for the
+# whole range of our extent. This introduced a problem where subsequent lookups
+# for the checksums of blocks within the block range [10, 25[ of our extent
+# would not find anything because lookups in the csum tree ended up looking
+# only at the smaller csum item, the one covering the block sub-range [5, 10[.
+# This made read requests assume an expected checksum with a value of 0 for
+# those blocks, which caused checksum verification failure when the read
+# operations finished.
 # However those checksum failures did not result in read requests returning an
 # error to user space (like -EIO for e.g.) because the expected checksum value
 # had the special value 0, and in that case btrfs set all bytes of the
@@ -106,10 +113,10 @@ md5sum $SCRATCH_MNT/foo | _filter_scratch
 #
 _flakey_drop_and_remount
-echo "File digest after log replay:"
-# Must match the same digest we had after cloning the extent and before the
-# power failure happened.
-md5sum $SCRATCH_MNT/foo | _filter_scratch
+echo "File contents after log replay:"
+# Must match the file contents we had after cloning the extent and before
+# the power failure happened.
+od -t x1 $SCRATCH_MNT/foo | _filter_od
 _unmount_flakey
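The clone arithmetic used by the test can be checked by hand. A minimal sketch, assuming a hypothetical 4K block size (the test itself derives BLOCK_SIZE from the mounted filesystem at run time):

```shell
# Byte values the cloner receives for a 4K block size (assumed here):
# source is 5 blocks into the extent at block 200, i.e. block 205.
BLOCK_SIZE=4096
src=$(((200 * BLOCK_SIZE) + (5 * BLOCK_SIZE)))  # 839680 bytes (block 205)
dst=$((100 * BLOCK_SIZE))                       # 409600 bytes (block 100)
len=$((5 * BLOCK_SIZE))                         # 20480 bytes (5 blocks)
echo "src=$src dst=$dst len=$len"
```

On a 64K block filesystem the same expressions yield proportionally larger byte offsets, which is exactly why the test's comments and output now speak in blocks rather than bytes.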
tests/btrfs/098.out: +21 -6
@@ -1,7 +1,22 @@
 QA output created by 098
-wrote 102400/102400 bytes at offset 819200
-XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-File digest before power failure:
-39b386375971248740ed8651d5a2ed9f SCRATCH_MNT/foo
-File digest after log replay:
-39b386375971248740ed8651d5a2ed9f SCRATCH_MNT/foo
+Blocks modified: [200 - 224]
+File contents before power failure:
+0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+*
+144 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
+*
+151 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+*
+310 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
+*
+341 
+File contents after log replay:
+0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+*
+144 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
+*
+151 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+*
+310 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
+*
+341 
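A note on the addresses in the golden od output above: od prints offsets in octal by default, and (assuming _filter_od rescales byte addresses into block units while keeping od's radix) the rows line up with the block boundaries the test uses. Mapping the decimal block numbers through printf reproduces the addresses:

```shell
# Octal rendering of the test's block boundaries: 100/105 bound the
# cloned 5-block range, 200/225 bound the original 25-block extent.
# Prints 144, 151, 310, 341 - the address column of the od output.
for block in 100 105 200 225; do
    printf '%o\n' "$block"
done
```

So the 0xaa rows at octal 144 and 310 are the cloned range and the original extent, and the final address 341 (decimal 225) is one block past the extent's end.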