Merge tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:
 "There is no particular theme here - mainly quick hits all over the
  tree.

  Most notable is a set of zlib changes from Mikhail Zaslonko which
  enhances and fixes zlib's use of S390 hardware support: 'lib/zlib: Set
  of s390 DFLTCC related patches for kernel zlib'"

* tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (55 commits)
  Update CREDITS file entry for Jesper Juhl
  sparc: allow PM configs for sparc32 COMPILE_TEST
  hung_task: print message when hung_task_warnings gets down to zero.
  arch/Kconfig: fix indentation
  scripts/tags.sh: fix the Kconfig tags generation when using latest ctags
  nilfs2: prevent WARNING in nilfs_dat_commit_end()
  lib/zlib: remove redundation assignement of avail_in dfltcc_gdht()
  lib/Kconfig.debug: do not enable DEBUG_PREEMPT by default
  lib/zlib: DFLTCC always switch to software inflate for Z_PACKET_FLUSH option
  lib/zlib: DFLTCC support inflate with small window
  lib/zlib: Split deflate and inflate states for DFLTCC
  lib/zlib: DFLTCC not writing header bits when avail_out == 0
  lib/zlib: fix DFLTCC ignoring flush modes when avail_in == 0
  lib/zlib: fix DFLTCC not flushing EOBS when creating raw streams
  lib/zlib: implement switching between DFLTCC and software
  lib/zlib: adjust offset calculation for dfltcc_state
  nilfs2: replace WARN_ONs for invalid DAT metadata block requests
  scripts/spelling.txt: add "exsits" pattern and fix typo instances
  fs: gracefully handle ->get_block not mapping bh in __mpage_writepage
  cramfs: Kconfig: fix spelling & punctuation
  ...
Author: Linus Torvalds
Date:   2023-02-23 17:55:40 -08:00
66 changed files with 1778 additions and 297 deletions


@@ -1852,11 +1852,11 @@ E: ajoshi@shell.unixbox.com
D: fbdev hacking
N: Jesper Juhl
E: jj@chaosbits.net
E: jesperjuhl76@gmail.com
D: Various fixes, cleanups and minor features all over the tree.
D: Wrote initial version of the hdaps driver (since passed on to others).
S: Lemnosvej 1, 3.tv
S: 2300 Copenhagen S.
S: Titangade 5G, 2.tv
S: 2200 Copenhagen N.
S: Denmark
N: Jozsef Kadlecsik


@@ -453,9 +453,10 @@ this allows system administrators to override the
kexec_load_disabled
===================
A toggle indicating if the ``kexec_load`` syscall has been disabled.
This value defaults to 0 (false: ``kexec_load`` enabled), but can be
set to 1 (true: ``kexec_load`` disabled).
A toggle indicating if the syscalls ``kexec_load`` and
``kexec_file_load`` have been disabled.
This value defaults to 0 (false: ``kexec_*load`` enabled), but can be
set to 1 (true: ``kexec_*load`` disabled).
Once true, kexec can no longer be used, and the toggle cannot be set
back to false.
This allows a kexec image to be loaded before disabling the syscall,
@@ -463,6 +464,24 @@ allowing a system to set up (and later use) an image without it being
altered.
Generally used together with the `modules_disabled`_ sysctl.
kexec_load_limit_panic
======================
This parameter specifies a limit to the number of times the syscalls
``kexec_load`` and ``kexec_file_load`` can be called with a crash
image. It can only be set with a more restrictive value than the
current one.
== ======================================================
-1 Unlimited calls to kexec. This is the default setting.
N Number of calls left.
== ======================================================
kexec_load_limit_reboot
=======================
Similar functionality as ``kexec_load_limit_panic``, but for a normal
image.
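
For illustration only (not part of this documentation patch), a privileged
process could tighten the crash-image limit by writing the new value to the
corresponding procfs file; the path below assumes the usual
``/proc/sys/kernel/<sysctl>`` mapping and the code is purely hypothetical::

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Allow exactly one more crash-image load. */
            const char *path = "/proc/sys/kernel/kexec_load_limit_panic";
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror(path);
                    return 1;
            }
            /* The kernel rejects values less restrictive than the current one. */
            if (write(fd, "1\n", 2) != 2)
                    perror("write");
            close(fd);
            return 0;
    }

``kexec_load_limit_reboot`` is adjusted the same way.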
kptr_restrict
=============


@@ -231,6 +231,71 @@ proc entries
This feature is intended for systematic testing of faults in a single
system call. See an example below.
Error Injectable Functions
--------------------------
This part is for kernel developers who are considering adding a function to
the ALLOW_ERROR_INJECTION() macro.
Requirements for the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since function-level error injection forcibly changes the code path and
returns an error even when the input and conditions are proper, allowing
error injection on a function which is NOT error injectable can cause an
unexpected kernel crash. Thus, you (and reviewers) must ensure that:

- The function returns an error code when it fails, and the callers check
  that code correctly (and can recover from the failure).

- The function does not execute any code which changes state before its
  first error return. "State" here includes global and local state as well
  as input variables: for example, clearing an output address (e.g.
  `*ret = NULL`), incrementing or decrementing a counter, setting a flag,
  disabling preemption or interrupts, or taking a lock (this is fine only
  if that state is restored before the error is returned).
The first requirement matters because release (object-freeing) functions are
usually harder to inject errors into than allocation functions: if an error
from such a release function is not handled correctly, it easily leads to a
memory leak, since the caller cannot tell whether the object was released or
is corrupted.
The second requirement protects callers which expect the function to have
done its work. If error injection skips the whole function body, that
expectation is violated and an unexpected error results. The sketch below
shows a function that satisfies both requirements.
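
A minimal sketch of such a function; all names (example_dev,
example_setup_buffer) are hypothetical and not taken from the tree::

    #include <linux/errno.h>
    #include <linux/error-injection.h>
    #include <linux/slab.h>

    #define EXAMPLE_MAX_LEN 4096

    struct example_dev {
            void    *buf;
            size_t  len;
    };

    int example_setup_buffer(struct example_dev *dev, size_t len)
    {
            void *buf;

            /* Every failure path comes before any state is modified... */
            if (!dev || !len || len > EXAMPLE_MAX_LEN)
                    return -EINVAL;

            buf = kzalloc(len, GFP_KERNEL);
            if (!buf)
                    return -ENOMEM;

            /* ...so an injected error, which skips the body entirely, looks
             * just like a real early failure to the caller. */
            dev->buf = buf;
            dev->len = len;
            return 0;
    }
    /* Returns -errno on failure, so the ERRNO type (described below) applies. */
    ALLOW_ERROR_INJECTION(example_setup_buffer, ERRNO);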
Type of the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each error injectable function has an error type specified by the
ALLOW_ERROR_INJECTION() macro. You have to choose it carefully when adding a
new error injectable function: if the wrong error type is chosen, the kernel
may crash because the caller cannot handle the injected error.
There are 4 types of errors defined in include/asm-generic/error-injection.h:

EI_ETYPE_NULL
  The function returns `NULL` on failure, e.g. a function returning the
  address of an allocated object.

EI_ETYPE_ERRNO
  The function returns an `-errno` error code on failure, e.g. -EINVAL when
  the input is wrong. This also covers functions that return an address
  encoding `-errno` via the ERR_PTR() macro.

EI_ETYPE_ERRNO_NULL
  The function returns either an `-errno` code or `NULL` on failure. If the
  callers check the return value with the IS_ERR_OR_NULL() macro, this type
  is appropriate.

EI_ETYPE_TRUE
  The function returns `true` (a non-zero positive value) on failure.

If you specify the wrong type, for example EI_ETYPE_ERRNO for a function
which returns an allocated object, it can cause problems because the returned
value is not an object address and the caller cannot access it.
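
For instance (hypothetical function, not from the tree), an allocator whose
callers test the result for NULL should be registered with the NULL type;
registering it as ERRNO would hand callers an encoded error code where they
expect either a valid pointer or NULL::

    #include <linux/error-injection.h>
    #include <linux/slab.h>

    struct example_obj {
            int refcount;
    };

    /* Callers check the result with "if (!obj)", so EI_ETYPE_NULL matches. */
    struct example_obj *example_obj_alloc(gfp_t gfp)
    {
            return kzalloc(sizeof(struct example_obj), gfp);
    }
    ALLOW_ERROR_INJECTION(example_obj_alloc, NULL);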
How to add new fault injection capability
-----------------------------------------


@@ -35,7 +35,7 @@ config HOTPLUG_SMT
bool
config GENERIC_ENTRY
bool
bool
config KPROBES
bool "Kprobes"
@@ -55,26 +55,26 @@ config JUMP_LABEL
depends on HAVE_ARCH_JUMP_LABEL
select OBJTOOL if HAVE_JUMP_LABEL_HACK
help
This option enables a transparent branch optimization that
makes certain almost-always-true or almost-always-false branch
conditions even cheaper to execute within the kernel.
This option enables a transparent branch optimization that
makes certain almost-always-true or almost-always-false branch
conditions even cheaper to execute within the kernel.
Certain performance-sensitive kernel code, such as trace points,
scheduler functionality, networking code and KVM have such
branches and include support for this optimization technique.
Certain performance-sensitive kernel code, such as trace points,
scheduler functionality, networking code and KVM have such
branches and include support for this optimization technique.
If it is detected that the compiler has support for "asm goto",
the kernel will compile such branches with just a nop
instruction. When the condition flag is toggled to true, the
nop will be converted to a jump instruction to execute the
conditional block of instructions.
If it is detected that the compiler has support for "asm goto",
the kernel will compile such branches with just a nop
instruction. When the condition flag is toggled to true, the
nop will be converted to a jump instruction to execute the
conditional block of instructions.
This technique lowers overhead and stress on the branch prediction
of the processor and generally makes the kernel faster. The update
of the condition is slower, but those are always very rare.
This technique lowers overhead and stress on the branch prediction
of the processor and generally makes the kernel faster. The update
of the condition is slower, but those are always very rare.
( On 32-bit x86, the necessary options added to the compiler
flags may increase the size of the kernel slightly. )
( On 32-bit x86, the necessary options added to the compiler
flags may increase the size of the kernel slightly. )
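
As a rough illustration of the mechanism this help text describes (the names
below are hypothetical, not code from this patch), a static key compiles the
disabled check down to a nop which is patched into a jump when the key is
enabled:

    #include <linux/jump_label.h>
    #include <linux/printk.h>

    DEFINE_STATIC_KEY_FALSE(example_trace_key);

    static void example_hot_path(void)
    {
            /* Compiles to a nop while the key is false. */
            if (static_branch_unlikely(&example_trace_key))
                    pr_info("example: tracing enabled\n");
    }

    static void example_enable_tracing(void)
    {
            /* Patches the nop into a jump at every site using this key. */
            static_branch_enable(&example_trace_key);
    }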
config STATIC_KEYS_SELFTEST
bool "Static key selftest"
@@ -98,9 +98,9 @@ config KPROBES_ON_FTRACE
depends on KPROBES && HAVE_KPROBES_ON_FTRACE
depends on DYNAMIC_FTRACE_WITH_REGS
help
If function tracer is enabled and the arch supports full
passing of pt_regs to function tracing, then kprobes can
optimize on top of function tracing.
If function tracer is enabled and the arch supports full
passing of pt_regs to function tracing, then kprobes can
optimize on top of function tracing.
config UPROBES
def_bool n
@@ -154,21 +154,21 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
config ARCH_USE_BUILTIN_BSWAP
bool
help
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
__arch_bswapXX() macros, allows the compiler to see what's
happening and offers more opportunity for optimisation. In
particular, the compiler will be able to combine the byteswap
with a nearby load or store and use load-and-swap or
store-and-swap instructions if the architecture has them. It
should almost *never* result in code which is worse than the
hand-coded assembler in <asm/swab.h>. But just in case it
does, the use of the builtins is optional.
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
__arch_bswapXX() macros, allows the compiler to see what's
happening and offers more opportunity for optimisation. In
particular, the compiler will be able to combine the byteswap
with a nearby load or store and use load-and-swap or
store-and-swap instructions if the architecture has them. It
should almost *never* result in code which is worse than the
hand-coded assembler in <asm/swab.h>. But just in case it
does, the use of the builtins is optional.
Any architecture with load-and-swap or store-and-swap
instructions should set this. And it shouldn't hurt to set it
on architectures that don't have such instructions.
Any architecture with load-and-swap or store-and-swap
instructions should set this. And it shouldn't hurt to set it
on architectures that don't have such instructions.
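
A standalone illustration of what the builtin buys (plain userspace C, not
kernel code): the compiler may fold __builtin_bswap32() into a byte-swapping
or load-and-swap instruction where the architecture provides one:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t be32_to_host(uint32_t be)
    {
            /* Provided by GCC >= 4.4 and clang. */
            return __builtin_bswap32(be);
    }

    int main(void)
    {
            printf("0x%08x\n", be32_to_host(0x12345678u));  /* 0x78563412 */
            return 0;
    }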
config KRETPROBES
def_bool y
@@ -720,13 +720,13 @@ config LTO_CLANG_FULL
depends on !COMPILE_TEST
select LTO_CLANG
help
This option enables Clang's full Link Time Optimization (LTO), which
allows the compiler to optimize the kernel globally. If you enable
this option, the compiler generates LLVM bitcode instead of ELF
object files, and the actual compilation from bitcode happens at
the LTO link step, which may take several minutes depending on the
kernel configuration. More information can be found from LLVM's
documentation:
This option enables Clang's full Link Time Optimization (LTO), which
allows the compiler to optimize the kernel globally. If you enable
this option, the compiler generates LLVM bitcode instead of ELF
object files, and the actual compilation from bitcode happens at
the LTO link step, which may take several minutes depending on the
kernel configuration. More information can be found from LLVM's
documentation:
https://llvm.org/docs/LinkTimeOptimization.html
@@ -1330,9 +1330,9 @@ config ARCH_HAS_CC_PLATFORM
bool
config HAVE_SPARSE_SYSCALL_NR
bool
help
An architecture should select this if its syscall numbering is sparse
bool
help
An architecture should select this if its syscall numbering is sparse
to save space. For example, MIPS architecture has a syscall array with
entries at 4000, 5000 and 6000 locations. This option turns on syscall
related optimizations for a given architecture.
@@ -1356,35 +1356,35 @@ config HAVE_PREEMPT_DYNAMIC_CALL
depends on HAVE_STATIC_CALL
select HAVE_PREEMPT_DYNAMIC
help
An architecture should select this if it can handle the preemption
model being selected at boot time using static calls.
An architecture should select this if it can handle the preemption
model being selected at boot time using static calls.
Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
preemption function will be patched directly.
Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
preemption function will be patched directly.
Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
call to a preemption function will go through a trampoline, and the
trampoline will be patched.
Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
call to a preemption function will go through a trampoline, and the
trampoline will be patched.
It is strongly advised to support inline static call to avoid any
overhead.
It is strongly advised to support inline static call to avoid any
overhead.
config HAVE_PREEMPT_DYNAMIC_KEY
bool
depends on HAVE_ARCH_JUMP_LABEL
select HAVE_PREEMPT_DYNAMIC
help
An architecture should select this if it can handle the preemption
model being selected at boot time using static keys.
An architecture should select this if it can handle the preemption
model being selected at boot time using static keys.
Each preemption function will be given an early return based on a
static key. This should have slightly lower overhead than non-inline
static calls, as this effectively inlines each trampoline into the
start of its callee. This may avoid redundant work, and may
integrate better with CFI schemes.
Each preemption function will be given an early return based on a
static key. This should have slightly lower overhead than non-inline
static calls, as this effectively inlines each trampoline into the
start of its callee. This may avoid redundant work, and may
integrate better with CFI schemes.
This will have greater overhead than using inline static calls as
the call to the preemption function cannot be entirely elided.
This will have greater overhead than using inline static calls as
the call to the preemption function cannot be entirely elided.
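
An illustrative sketch of the static-call flavour, using hypothetical names
(the real preempt_dynamic wiring lives in the scheduler code): the boot-time
choice retargets a static call once, after which every call site branches
directly to the selected function:

    #include <linux/init.h>
    #include <linux/static_call.h>

    static int example_preempt_none(void) { return 0; }
    static int example_preempt_full(void) { return 1; }

    /* Default target chosen at build time. */
    DEFINE_STATIC_CALL(example_preempt_mode, example_preempt_none);

    static int example_should_resched(void)
    {
            /* A direct call; patched in place with HAVE_STATIC_CALL_INLINE. */
            return static_call(example_preempt_mode)();
    }

    static void __init example_pick_model(void)
    {
            /* e.g. in response to "preempt=full" on the command line. */
            static_call_update(example_preempt_mode, example_preempt_full);
    }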
config ARCH_WANT_LD_ORPHAN_WARN
bool
@@ -1407,8 +1407,8 @@ config ARCH_SUPPORTS_PAGE_TABLE_CHECK
config ARCH_SPLIT_ARG64
bool
help
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
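
A minimal sketch of what the option implies (hypothetical helper; the real
glue is generated by the syscall wrapper macros): a 64-bit argument arrives
as two 32-bit halves and is stitched back together, with the half order
depending on the ABI and endianness:

    #include <linux/types.h>

    static inline u64 example_merge_arg64(u32 lo, u32 hi)
    {
            return ((u64)hi << 32) | lo;
    }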
config ARCH_HAS_ELFCORE_COMPAT
bool


@@ -73,7 +73,7 @@ struct halt_info {
static void
common_shutdown_1(void *generic_ptr)
{
struct halt_info *how = (struct halt_info *)generic_ptr;
struct halt_info *how = generic_ptr;
struct percpu_struct *cpup;
unsigned long *pflags, flags;
int cpuid = smp_processor_id();


@@ -628,7 +628,7 @@ flush_tlb_all(void)
static void
ipi_flush_tlb_mm(void *x)
{
struct mm_struct *mm = (struct mm_struct *) x;
struct mm_struct *mm = x;
if (mm == current->active_mm && !asn_locked())
flush_tlb_current(mm);
else
@@ -670,7 +670,7 @@ struct flush_tlb_page_struct {
static void
ipi_flush_tlb_page(void *x)
{
struct flush_tlb_page_struct *data = (struct flush_tlb_page_struct *)x;
struct flush_tlb_page_struct *data = x;
struct mm_struct * mm = data->mm;
if (mm == current->active_mm && !asn_locked())


@@ -283,7 +283,7 @@ config ARCH_FORCE_MAX_ORDER
This config option is actually maximum order plus one. For example,
a value of 13 means that the largest free memory block is 2^12 pages.
if SPARC64
if SPARC64 || COMPILE_TEST
source "kernel/power/Kconfig"
endif


@@ -2615,8 +2615,8 @@ static bool emulator_io_port_access_allowed(struct x86_emulate_ctxt *ctxt,
return true;
}
static bool emulator_io_permited(struct x86_emulate_ctxt *ctxt,
u16 port, u16 len)
static bool emulator_io_permitted(struct x86_emulate_ctxt *ctxt,
u16 port, u16 len)
{
if (ctxt->perm_ok)
return true;
@@ -3961,7 +3961,7 @@ static int check_rdpmc(struct x86_emulate_ctxt *ctxt)
static int check_perm_in(struct x86_emulate_ctxt *ctxt)
{
ctxt->dst.bytes = min(ctxt->dst.bytes, 4u);
if (!emulator_io_permited(ctxt, ctxt->src.val, ctxt->dst.bytes))
if (!emulator_io_permitted(ctxt, ctxt->src.val, ctxt->dst.bytes))
return emulate_gp(ctxt, 0);
return X86EMUL_CONTINUE;
@@ -3970,7 +3970,7 @@ static int check_perm_in(struct x86_emulate_ctxt *ctxt)
static int check_perm_out(struct x86_emulate_ctxt *ctxt)
{
ctxt->src.bytes = min(ctxt->src.bytes, 4u);
if (!emulator_io_permited(ctxt, ctxt->dst.val, ctxt->src.bytes))
if (!emulator_io_permitted(ctxt, ctxt->dst.val, ctxt->src.bytes))
return emulate_gp(ctxt, 0);
return X86EMUL_CONTINUE;


@@ -446,7 +446,7 @@ iscsi_iser_conn_create(struct iscsi_cls_session *cls_session,
* @is_leading: indicate if this is the session leading connection (MCS)
*
* Return: zero on success, $error if iscsi_conn_bind fails and
* -EINVAL in case end-point doesn't exsits anymore or iser connection
* -EINVAL in case end-point doesn't exists anymore or iser connection
* state is not UP (teardown already started).
*/
static int iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,


@@ -38,7 +38,7 @@ config CRAMFS_MTD
default y if !CRAMFS_BLOCKDEV
help
This option allows the CramFs driver to load data directly from
a linear adressed memory range (usually non volatile memory
a linear addressed memory range (usually non-volatile memory
like flash) instead of going through the block device layer.
This saves some memory since no intermediate buffering is
necessary.


@@ -786,11 +786,10 @@ static void ext4_update_bh_state(struct buffer_head *bh, unsigned long flags)
* once we get rid of using bh as a container for mapping information
* to pass to / from get_block functions, this can go away.
*/
old_state = READ_ONCE(bh->b_state);
do {
old_state = READ_ONCE(bh->b_state);
new_state = (old_state & ~EXT4_MAP_FLAGS) | flags;
} while (unlikely(
cmpxchg(&bh->b_state, old_state, new_state) != old_state));
} while (unlikely(!try_cmpxchg(&bh->b_state, &old_state, new_state)));
}
static int _ext4_get_block(struct inode *inode, sector_t iblock,
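
As an aside on the try_cmpxchg() conversion above, here is the pattern in
isolation on a hypothetical flags word (not the ext4 code): try_cmpxchg()
writes the observed value back into its 'old' argument on failure, so the
loop no longer re-reads the location on every iteration:

    #include <linux/atomic.h>
    #include <linux/compiler.h>

    static unsigned long example_state;

    static void example_set_flags(unsigned long flags)
    {
            unsigned long old = READ_ONCE(example_state);
            unsigned long new;

            do {
                    new = old | flags;
                    /* On failure, 'old' is refreshed with the current value. */
            } while (!try_cmpxchg(&example_state, &old, new));
    }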


@@ -200,7 +200,7 @@ static const struct dentry_operations vfat_dentry_ops = {
/* Characters that are undesirable in an MS-DOS file name */
static inline wchar_t vfat_bad_char(wchar_t w)
static inline bool vfat_bad_char(wchar_t w)
{
return (w < 0x0020)
|| (w == '*') || (w == '?') || (w == '<') || (w == '>')
@@ -208,7 +208,7 @@ static inline wchar_t vfat_bad_char(wchar_t w)
|| (w == '\\');
}
static inline wchar_t vfat_replace_char(wchar_t w)
static inline bool vfat_replace_char(wchar_t w)
{
return (w == '[') || (w == ']') || (w == ';') || (w == ',')
|| (w == '+') || (w == '=');


@@ -31,7 +31,7 @@ vxfs_put_page(struct page *pp)
/**
* vxfs_get_page - read a page into memory.
* @ip: inode to read from
* @mapping: mapping to read from
* @n: page number
*
* Description:
@@ -81,14 +81,14 @@ vxfs_bread(struct inode *ip, int block)
}
/**
* vxfs_get_block - locate buffer for given inode,block tuple
* vxfs_getblk - locate buffer for given inode,block tuple
* @ip: inode
* @iblock: logical block
* @bp: buffer skeleton
* @create: %TRUE if blocks may be newly allocated.
*
* Description:
* The vxfs_get_block function fills @bp with the right physical
* The vxfs_getblk function fills @bp with the right physical
* block and device number to perform a lowlevel read/write on
* it.
*


@@ -165,7 +165,7 @@ static int vxfs_try_sb_magic(struct super_block *sbp, int silent,
}
/**
* vxfs_read_super - read superblock into memory and initialize filesystem
* vxfs_fill_super - read superblock into memory and initialize filesystem
* @sbp: VFS superblock (to fill)
* @dp: fs private mount data
* @silent: do not complain loudly when sth is wrong


@@ -274,6 +274,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
tree->node_hash[hash] = node;
tree->node_hash_cnt++;
} else {
hfs_bnode_get(node2);
spin_unlock(&tree->hash_lock);
kfree(node);
wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));


@@ -486,7 +486,7 @@ void hfs_file_truncate(struct inode *inode)
inode->i_size);
if (inode->i_size > HFS_I(inode)->phys_size) {
struct address_space *mapping = inode->i_mapping;
void *fsdata;
void *fsdata = NULL;
struct page *page;
/* XXX: Can use generic_cont_expand? */


@@ -554,7 +554,7 @@ void hfsplus_file_truncate(struct inode *inode)
if (inode->i_size > hip->phys_size) {
struct address_space *mapping = inode->i_mapping;
struct page *page;
void *fsdata;
void *fsdata = NULL;
loff_t size = inode->i_size;
res = hfsplus_write_begin(NULL, mapping, size, 0,


@@ -257,7 +257,7 @@ end_attr_file_creation:
int __hfsplus_setxattr(struct inode *inode, const char *name,
const void *value, size_t size, int flags)
{
int err = 0;
int err;
struct hfs_find_data cat_fd;
hfsplus_cat_entry entry;
u16 cat_entry_flags, cat_entry_type;
@@ -494,7 +494,7 @@ ssize_t __hfsplus_getxattr(struct inode *inode, const char *name,
__be32 xattr_record_type;
u32 record_type;
u16 record_length = 0;
ssize_t res = 0;
ssize_t res;
if ((!S_ISREG(inode->i_mode) &&
!S_ISDIR(inode->i_mode)) ||
@@ -606,7 +606,7 @@ static inline int can_list(const char *xattr_name)
static ssize_t hfsplus_listxattr_finder_info(struct dentry *dentry,
char *buffer, size_t size)
{
ssize_t res = 0;
ssize_t res;
struct inode *inode = d_inode(dentry);
struct hfs_find_data fd;
u16 entry_type;
@@ -674,10 +674,9 @@ end_listxattr_finder_info:
ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size)
{
ssize_t err;
ssize_t res = 0;
ssize_t res;
struct inode *inode = d_inode(dentry);
struct hfs_find_data fd;
u16 key_len = 0;
struct hfsplus_attr_key attr_key;
char *strbuf;
int xattr_name_len;
@@ -719,7 +718,8 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size)
}
for (;;) {
key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset);
u16 key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset);
if (key_len == 0 || key_len > fd.tree->max_key_len) {
pr_err("invalid xattr key length: %d\n", key_len);
res = -EIO;
@@ -766,12 +766,12 @@ out:
static int hfsplus_removexattr(struct inode *inode, const char *name)
{
int err = 0;
int err;
struct hfs_find_data cat_fd;
u16 flags;
u16 cat_entry_type;
int is_xattr_acl_deleted = 0;
int is_all_xattrs_deleted = 0;
int is_xattr_acl_deleted;
int is_all_xattrs_deleted;
if (!HFSPLUS_SB(inode->i_sb)->attr_tree)
return -EOPNOTSUPP;


@@ -40,8 +40,21 @@ static inline struct nilfs_dat_info *NILFS_DAT_I(struct inode *dat)
static int nilfs_dat_prepare_entry(struct inode *dat,
struct nilfs_palloc_req *req, int create)
{
return nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
create, &req->pr_entry_bh);
int ret;
ret = nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
create, &req->pr_entry_bh);
if (unlikely(ret == -ENOENT)) {
nilfs_err(dat->i_sb,
"DAT doesn't have a block to manage vblocknr = %llu",
(unsigned long long)req->pr_entry_nr);
/*
* Return internal code -EINVAL to notify bmap layer of
* metadata corruption.
*/
ret = -EINVAL;
}
return ret;
}
static void nilfs_dat_commit_entry(struct inode *dat,
@@ -123,11 +136,7 @@ static void nilfs_dat_commit_free(struct inode *dat,
int nilfs_dat_prepare_start(struct inode *dat, struct nilfs_palloc_req *req)
{
int ret;
ret = nilfs_dat_prepare_entry(dat, req, 0);
WARN_ON(ret == -ENOENT);
return ret;
return nilfs_dat_prepare_entry(dat, req, 0);
}
void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req,
@@ -149,19 +158,19 @@ void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req,
int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req)
{
struct nilfs_dat_entry *entry;
__u64 start;
sector_t blocknr;
void *kaddr;
int ret;
ret = nilfs_dat_prepare_entry(dat, req, 0);
if (ret < 0) {
WARN_ON(ret == -ENOENT);
if (ret < 0)
return ret;
}
kaddr = kmap_atomic(req->pr_entry_bh->b_page);
entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
req->pr_entry_bh, kaddr);
start = le64_to_cpu(entry->de_start);
blocknr = le64_to_cpu(entry->de_blocknr);
kunmap_atomic(kaddr);
@@ -172,6 +181,15 @@ int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req)
return ret;
}
}
if (unlikely(start > nilfs_mdt_cno(dat))) {
nilfs_err(dat->i_sb,
"vblocknr = %llu has abnormal lifetime: start cno (= %llu) > current cno (= %llu)",
(unsigned long long)req->pr_entry_nr,
(unsigned long long)start,
(unsigned long long)nilfs_mdt_cno(dat));
nilfs_dat_abort_entry(dat, req);
return -EINVAL;
}
return 0;
}


@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* aops.c - NTFS kernel address space operations and page cache handling.
*
* Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc.
@@ -1646,7 +1646,7 @@ hole:
return block;
}
/**
/*
* ntfs_normal_aops - address space operations for normal inodes and attributes
*
* Note these are not used for compressed or mst protected inodes and
@@ -1664,7 +1664,7 @@ const struct address_space_operations ntfs_normal_aops = {
.error_remove_page = generic_error_remove_page,
};
/**
/*
* ntfs_compressed_aops - address space operations for compressed inodes
*/
const struct address_space_operations ntfs_compressed_aops = {
@@ -1678,9 +1678,9 @@ const struct address_space_operations ntfs_compressed_aops = {
.error_remove_page = generic_error_remove_page,
};
/**
/*
* ntfs_mst_aops - general address space operations for mst protected inodes
* and attributes
* and attributes
*/
const struct address_space_operations ntfs_mst_aops = {
.read_folio = ntfs_read_folio, /* Fill page with data. */

Some files were not shown because too many files have changed in this diff.