Daniel Borkmann says:
====================
bpf-next 2022-07-22
We've added 73 non-merge commits during the last 12 day(s) which contain
a total of 88 files changed, 3458 insertions(+), 860 deletions(-).
The main changes are:
1) Implement BPF trampoline for arm64 JIT, from Xu Kuohai.
2) Add ksyscall/kretsyscall section support to libbpf to simplify tracing kernel
syscalls through kprobe mechanism, from Andrii Nakryiko.
3) Allow for livepatch (KLP) and BPF trampolines to attach to the same kernel
function, from Song Liu & Jiri Olsa.
4) Add new kfunc infrastructure for netfilter's CT e.g. to insert and change
entries, from Kumar Kartikeya Dwivedi & Lorenzo Bianconi.
5) Add a ksym BPF iterator to allow for more flexible and efficient interactions
with kernel symbols, from Alan Maguire.
6) Bug fixes in libbpf e.g. for uprobe binary path resolution, from Dan Carpenter.
7) Fix BPF subprog function names in stack traces, from Alexei Starovoitov.
8) libbpf support for writing custom perf event readers, from Jon Doron.
9) Switch to use SPDX tag for BPF helper man page, from Alejandro Colomar.
10) Fix xsk send-only sockets when in busy poll mode, from Maciej Fijalkowski.
11) Reparent BPF maps and their charging on memcg offlining, from Roman Gushchin.
12) Multiple follow-up fixes around BPF lsm cgroup infra, from Stanislav Fomichev.
13) Use bootstrap version of bpftool where possible to speed up builds, from Pu Lehui.
14) Cleanup BPF verifier's check_func_arg() handling, from Joanne Koong.
15) Make non-prealloced BPF map allocations low priority to play better with
memcg limits, from Yafang Shao.
16) Fix BPF test runner to reject zero-length data for skbs, from Zhengchao Shao.
17) Various smaller cleanups and improvements all over the place.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (73 commits)
bpf: Simplify bpf_prog_pack_[size|mask]
bpf: Support bpf_trampoline on functions with IPMODIFY (e.g. livepatch)
bpf, x64: Allow to use caller address from stack
ftrace: Allow IPMODIFY and DIRECT ops on the same function
ftrace: Add modify_ftrace_direct_multi_nolock
bpf/selftests: Fix couldn't retrieve pinned program in xdp veth test
bpf: Fix build error in case of !CONFIG_DEBUG_INFO_BTF
selftests/bpf: Fix test_verifier failed test in unprivileged mode
selftests/bpf: Add negative tests for new nf_conntrack kfuncs
selftests/bpf: Add tests for new nf_conntrack kfuncs
selftests/bpf: Add verifier tests for trusted kfunc args
net: netfilter: Add kfuncs to set and change CT status
net: netfilter: Add kfuncs to set and change CT timeout
net: netfilter: Add kfuncs to allocate and insert CT
net: netfilter: Deduplicate code in bpf_{xdp,skb}_ct_lookup
bpf: Add documentation for kfuncs
bpf: Add support for forcing kfunc args to be trusted
bpf: Switch to new kfunc flags infrastructure
tools/resolve_btfids: Add support for 8-byte BTF sets
bpf: Introduce 8-byte BTF set
...
====================
Link: https://lore.kernel.org/r/20220722221218.29943-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
@@ -369,7 +369,8 @@ No additional type data follow ``btf_type``.
 * ``name_off``: offset to a valid C identifier
 * ``info.kind_flag``: 0
 * ``info.kind``: BTF_KIND_FUNC
 * ``info.vlen``: 0
 * ``info.vlen``: linkage information (BTF_FUNC_STATIC, BTF_FUNC_GLOBAL
   or BTF_FUNC_EXTERN)
 * ``type``: a BTF_KIND_FUNC_PROTO type

No additional type data follow ``btf_type``.

@@ -380,6 +381,9 @@ type. The BTF_KIND_FUNC may in turn be referenced by a func_info in the
:ref:`BTF_Ext_Section` (ELF) or in the arguments to :ref:`BPF_Prog_Load`
(ABI).

Currently, only linkage values of BTF_FUNC_STATIC and BTF_FUNC_GLOBAL are
supported in the kernel.

2.2.13 BTF_KIND_FUNC_PROTO
~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -19,6 +19,7 @@ that goes into great technical depth about the BPF Architecture.
   faq
   syscall_api
   helpers
   kfuncs
   programs
   maps
   bpf_prog_run

Documentation/bpf/kfuncs.rst (new file, 170 lines)
@@ -0,0 +1,170 @@
=============================
BPF Kernel Functions (kfuncs)
=============================

1. Introduction
===============

BPF Kernel Functions, more commonly known as kfuncs, are functions in the Linux
kernel which are exposed for use by BPF programs. Unlike normal BPF helpers,
kfuncs do not have a stable interface and can change from one kernel release to
another. Hence, BPF programs need to be updated in response to changes in the
kernel.

2. Defining a kfunc
===================

There are two ways to expose a kernel function to BPF programs: either make an
existing function in the kernel visible, or add a new wrapper for BPF. In both
cases, care must be taken that a BPF program can only call such a function in a
valid context. To enforce this, visibility of a kfunc can be per program type.

If you are not creating a BPF wrapper for an existing kernel function, skip
ahead to :ref:`BPF_kfunc_nodef`.

2.1 Creating a wrapper kfunc
----------------------------

When defining a wrapper kfunc, the wrapper function should have extern linkage.
This prevents the compiler from optimizing away dead code, as this wrapper kfunc
is not invoked anywhere in the kernel itself. It is not necessary to provide a
prototype in a header for the wrapper kfunc.

An example is given below::

        /* Disables missing prototype warnings */
        __diag_push();
        __diag_ignore_all("-Wmissing-prototypes",
                          "Global kfuncs as their definitions will be in BTF");

        struct task_struct *bpf_find_get_task_by_vpid(pid_t nr)
        {
                return find_get_task_by_vpid(nr);
        }

        __diag_pop();

A wrapper kfunc is often needed when we need to annotate parameters of the
kfunc. Otherwise one may directly make the kfunc visible to the BPF program by
registering it with the BPF subsystem. See :ref:`BPF_kfunc_nodef`.

2.2 Annotating kfunc parameters
-------------------------------

Similar to BPF helpers, there is sometimes a need to provide additional context
to the verifier in order to make the use of kernel functions safer and more
useful. Hence, we can annotate a parameter by suffixing the name of the
argument of the kfunc with a __tag, where tag may be one of the supported
annotations.

2.2.1 __sz Annotation
---------------------

This annotation is used to indicate a memory and size pair in the argument list.
An example is given below::

        void bpf_memzero(void *mem, int mem__sz)
        {
                ...
        }

Here, the verifier will treat the first argument as a PTR_TO_MEM, and the second
argument as its size. By default, without the __sz annotation, the size of the
type of the pointer is used. Without the __sz annotation, a kfunc cannot accept
a void pointer.

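From the BPF program side, a registered kfunc is declared as an ``extern``
function with the ``__ksym`` attribute and called like any other function. A
minimal sketch is given below; it assumes the hypothetical ``bpf_memzero``
kfunc from the example above has been registered for the program type in use::

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        extern void bpf_memzero(void *mem, int mem__sz) __ksym;

        SEC("tc")
        int zero_scratch(struct __sk_buff *skb)
        {
                char buf[16] = {};

                /* buf and sizeof(buf) form the mem/mem__sz pair that the
                 * verifier checks against the __sz annotation.
                 */
                bpf_memzero(buf, sizeof(buf));
                return 0;
        }

        char _license[] SEC("license") = "GPL";
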
.. _BPF_kfunc_nodef:

2.3 Using an existing kernel function
-------------------------------------

When an existing function in the kernel is fit for consumption by BPF programs,
it can be directly registered with the BPF subsystem. However, care must still
be taken to review the context in which it will be invoked by the BPF program
and whether it is safe to do so.

2.4 Annotating kfuncs
---------------------

In addition to kfuncs' arguments, the verifier may need more information about
the type of kfunc(s) being registered with the BPF subsystem. To do so, we
define flags on a set of kfuncs as follows::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

This set encodes the BTF ID of each kfunc listed above, and encodes the flags
along with it. Of course, it is also allowed to specify no flags.

2.4.1 KF_ACQUIRE flag
---------------------

The KF_ACQUIRE flag is used to indicate that the kfunc returns a pointer to a
refcounted object. The verifier will then ensure that the pointer to the object
is eventually released using a release kfunc, or transferred to a map using a
referenced kptr (by invoking bpf_kptr_xchg). If not, the verifier fails the
loading of the BPF program until no lingering references remain in all possible
explored states of the program.

2.4.2 KF_RET_NULL flag
----------------------

The KF_RET_NULL flag is used to indicate that the pointer returned by the kfunc
may be NULL. Hence, it forces the user to do a NULL check on the pointer
returned from the kfunc before making use of it (dereferencing or passing to
another helper). This flag is often used in pairing with the KF_ACQUIRE flag,
but both are orthogonal to each other.

2.4.3 KF_RELEASE flag
---------------------

The KF_RELEASE flag is used to indicate that the kfunc releases the pointer
passed in to it. Only one referenced pointer can be passed in. All copies of
the pointer being released are invalidated as a result of invoking a kfunc
with this flag.

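To make the interplay of these flags concrete, the sketch below shows how a BPF
program would use the ``bpf_get_task_pid``/``bpf_put_pid`` pair from the set in
section 2.4. The prototypes are only assumed here for illustration; the point
is that the acquired pointer must be NULL checked and eventually released::

        #include <vmlinux.h>
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        /* Assumed prototypes of the illustrative kfuncs from section 2.4. */
        extern struct pid *bpf_get_task_pid(struct task_struct *task) __ksym;
        extern void bpf_put_pid(struct pid *pid) __ksym;

        SEC("tp_btf/task_newtask")
        int BPF_PROG(handle_newtask, struct task_struct *task, u64 clone_flags)
        {
                struct pid *pid;

                /* KF_RET_NULL: the returned pointer must be NULL checked. */
                pid = bpf_get_task_pid(task);
                if (!pid)
                        return 0;

                /* KF_ACQUIRE/KF_RELEASE: every acquired reference must be
                 * released (or moved into a map with bpf_kptr_xchg) before
                 * the program exits, otherwise the verifier rejects it.
                 */
                bpf_put_pid(pid);
                return 0;
        }

        char _license[] SEC("license") = "GPL";
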
2.4.4 KF_KPTR_GET flag
----------------------

The KF_KPTR_GET flag is used to indicate that the kfunc takes the first argument
as a pointer to a kptr, safely increments the refcount of the object it points
to, and returns a reference to the user. The rest of the arguments may be normal
arguments of a kfunc. The KF_KPTR_GET flag should be used in conjunction with
the KF_ACQUIRE and KF_RET_NULL flags.

2.4.5 KF_TRUSTED_ARGS flag
--------------------------

The KF_TRUSTED_ARGS flag is used for kfuncs taking pointer arguments. It
indicates that all pointer arguments will always be refcounted, and have
their offset set to 0. It can be used to enforce that a pointer to a refcounted
object acquired from a kfunc or BPF helper is passed as an argument to this
kfunc without any modifications (e.g. pointer arithmetic) such that it is
trusted and points to the original object. This flag is often used for kfuncs
that operate (change some property, perform some operation) on an object that
was obtained using an acquire kfunc. Such kfuncs need an unchanged pointer to
ensure the integrity of the operation being performed on the expected object.

2.5 Registering the kfuncs
--------------------------

Once the kfunc is prepared for use, the final step to making it visible is
registering it with the BPF subsystem. Registration is done per BPF program
type. An example is shown below::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

        static const struct btf_kfunc_id_set bpf_task_kfunc_set = {
                .owner = THIS_MODULE,
                .set = &bpf_task_set,
        };

        static int init_subsystem(void)
        {
                return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_task_kfunc_set);
        }
        late_initcall(init_subsystem);

Documentation/bpf/map_hash.rst (new file, 185 lines)
@@ -0,0 +1,185 @@
.. SPDX-License-Identifier: GPL-2.0-only
.. Copyright (C) 2022 Red Hat, Inc.

===============================================
BPF_MAP_TYPE_HASH, with PERCPU and LRU Variants
===============================================

.. note::
   - ``BPF_MAP_TYPE_HASH`` was introduced in kernel version 3.19
   - ``BPF_MAP_TYPE_PERCPU_HASH`` was introduced in version 4.6
   - Both ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH``
     were introduced in version 4.10

``BPF_MAP_TYPE_HASH`` and ``BPF_MAP_TYPE_PERCPU_HASH`` provide general
purpose hash map storage. Both the key and the value can be structs,
allowing for composite keys and values.

The kernel is responsible for allocating and freeing key/value pairs, up
to the max_entries limit that you specify. Hash maps use pre-allocation
of hash table elements by default. The ``BPF_F_NO_PREALLOC`` flag can be
used to disable pre-allocation when it is too memory expensive.

``BPF_MAP_TYPE_PERCPU_HASH`` provides a separate value slot per
CPU. The per-cpu values are stored internally in an array.

The ``BPF_MAP_TYPE_LRU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH``
variants add LRU semantics to their respective hash tables. An LRU hash
will automatically evict the least recently used entries when the hash
table reaches capacity. An LRU hash maintains an internal LRU list that
is used to select elements for eviction. This internal LRU list is
shared across CPUs but it is possible to request a per CPU LRU list with
the ``BPF_F_NO_COMMON_LRU`` flag when calling ``bpf_map_create``.

Usage
=====

.. c:function::
   long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)

Hash entries can be added or updated using the ``bpf_map_update_elem()``
helper. This helper replaces existing elements atomically. The ``flags``
parameter can be used to control the update behaviour:

- ``BPF_ANY`` will create a new element or update an existing element
- ``BPF_NOEXIST`` will create a new element only if one did not already
  exist
- ``BPF_EXIST`` will update an existing element

``bpf_map_update_elem()`` returns 0 on success, or negative error in
case of failure.

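For example, a program can distinguish between inserting a new entry and
updating an existing one by choosing the flag. A minimal sketch, assuming a
hash map named ``my_map`` with ``__u32`` keys and values declared elsewhere in
the program:

.. code-block:: c

    static long set_initial_or_bump(void)
    {
        __u32 key = 42, val = 1;
        long err;

        /* Fails with -EEXIST if the key is already present. */
        err = bpf_map_update_elem(&my_map, &key, &val, BPF_NOEXIST);
        if (err == -EEXIST)
            /* Fails with -ENOENT if the key vanished in the meantime. */
            err = bpf_map_update_elem(&my_map, &key, &val, BPF_EXIST);
        return err;
    }
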
.. c:function::
   void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)

Hash entries can be retrieved using the ``bpf_map_lookup_elem()``
helper. This helper returns a pointer to the value associated with
``key``, or ``NULL`` if no entry was found.

.. c:function::
   long bpf_map_delete_elem(struct bpf_map *map, const void *key)

Hash entries can be deleted using the ``bpf_map_delete_elem()``
helper. This helper will return 0 on success, or negative error in case
of failure.

Per CPU Hashes
--------------

For ``BPF_MAP_TYPE_PERCPU_HASH`` and ``BPF_MAP_TYPE_LRU_PERCPU_HASH``
the ``bpf_map_update_elem()`` and ``bpf_map_lookup_elem()`` helpers
automatically access the hash slot for the current CPU.

.. c:function::
   void *bpf_map_lookup_percpu_elem(struct bpf_map *map, const void *key, u32 cpu)

The ``bpf_map_lookup_percpu_elem()`` helper can be used to look up the
value in the hash slot for a specific CPU. It returns the value associated
with ``key`` on ``cpu``, or ``NULL`` if no entry was found or ``cpu`` is
invalid.

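For instance, a per-CPU hash can keep contention-free counters: each CPU
updates only its own slot, and userspace aggregates the slots later. A minimal
sketch, assuming a ``BPF_MAP_TYPE_PERCPU_HASH`` named ``percpu_counts`` with
``__u32`` keys and ``__u64`` values declared elsewhere in the program:

.. code-block:: c

    static void count_event(void)
    {
        __u32 key = 0;
        __u64 *count, init = 1;

        /* In a BPF program this returns the slot of the current CPU only. */
        count = bpf_map_lookup_elem(&percpu_counts, &key);
        if (count)
            __sync_fetch_and_add(count, 1);
        else
            bpf_map_update_elem(&percpu_counts, &key, &init, BPF_NOEXIST);
    }
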
Concurrency
-----------

Values stored in ``BPF_MAP_TYPE_HASH`` can be accessed concurrently by
programs running on different CPUs. Since kernel version 5.1, the BPF
infrastructure provides ``struct bpf_spin_lock`` to synchronise access.
See ``tools/testing/selftests/bpf/progs/test_spin_lock.c``.

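A sketch of how this is typically wired up is shown below: the value struct
embeds the lock, and the program brackets its read-modify-write sequence with
``bpf_spin_lock()``/``bpf_spin_unlock()``. The map name ``stats_map`` is
illustrative only; it is assumed to be a ``BPF_MAP_TYPE_HASH`` with ``__u32``
keys and ``struct concurrent_value`` values:

.. code-block:: c

    struct concurrent_value {
        struct bpf_spin_lock lock;
        __u64 packets;
    };

    static void count_packet(__u32 *key)
    {
        struct concurrent_value *value;

        value = bpf_map_lookup_elem(&stats_map, key);
        if (!value)
            return;

        bpf_spin_lock(&value->lock);
        value->packets++;
        bpf_spin_unlock(&value->lock);
    }
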
Userspace
---------

.. c:function::
   int bpf_map_get_next_key(int fd, const void *cur_key, void *next_key)

In userspace, it is possible to iterate through the keys of a hash using
libbpf's ``bpf_map_get_next_key()`` function. The first key can be fetched by
calling ``bpf_map_get_next_key()`` with ``cur_key`` set to
``NULL``. Subsequent calls will fetch the next key that follows the
current key. ``bpf_map_get_next_key()`` returns 0 on success, -ENOENT if
``cur_key`` is the last key in the hash, or negative error in case of
failure.

Note that if ``cur_key`` gets deleted then ``bpf_map_get_next_key()``
will instead return the *first* key in the hash table, which is
undesirable. It is recommended to use batched lookup if there is going
to be key deletion intermixed with ``bpf_map_get_next_key()``.

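A sketch of such a batched walk using libbpf's ``bpf_map_lookup_batch()``
(from ``<bpf/bpf.h>``) is shown below. The buffer sizes and element types are
illustrative (4-byte keys, 8-byte values), ``map_fd`` is the map's file
descriptor, and the sketch assumes the libbpf 1.0 convention of returning a
negative error code directly; older libbpf versions return -1 and set errno
instead:

.. code-block:: c

    static void dump_hash_batched(int map_fd)
    {
        __u32 batch, count;
        __u32 keys[32];
        __u64 values[32];
        void *in = NULL;
        int err;

        for (;;) {
            count = 32;
            err = bpf_map_lookup_batch(map_fd, in, &batch,
                                       keys, values, &count, NULL);
            if (err && err != -ENOENT)
                break;              /* real error */

            /* use keys[0..count-1] and values[0..count-1] here */

            if (err == -ENOENT)     /* this was the last batch */
                break;
            in = &batch;            /* resume after the returned cursor */
        }
    }
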
Examples
========

Please see the ``tools/testing/selftests/bpf`` directory for functional
examples. The code snippets below demonstrate API usage.

This example shows how to declare an LRU Hash with a struct key and a
struct value.

.. code-block:: c

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct key {
        __u32 srcip;
    };

    struct value {
        __u64 packets;
        __u64 bytes;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_LRU_HASH);
        __uint(max_entries, 32);
        __type(key, struct key);
        __type(value, struct value);
    } packet_stats SEC(".maps");

This example shows how to create or update hash values using atomic
instructions:

.. code-block:: c

    static void update_stats(__u32 srcip, int bytes)
    {
        struct key key = {
            .srcip = srcip,
        };
        struct value *value = bpf_map_lookup_elem(&packet_stats, &key);

        if (value) {
            __sync_fetch_and_add(&value->packets, 1);
            __sync_fetch_and_add(&value->bytes, bytes);
        } else {
            struct value newval = { 1, bytes };

            bpf_map_update_elem(&packet_stats, &key, &newval, BPF_NOEXIST);
        }
    }

Userspace walking the map elements from the map declared above:

.. code-block:: c

    #include <bpf/libbpf.h>
    #include <bpf/bpf.h>

    static void walk_hash_elements(int map_fd)
    {
        struct key *cur_key = NULL;
        struct key next_key;
        struct value value;
        int err;

        for (;;) {
            err = bpf_map_get_next_key(map_fd, cur_key, &next_key);
            if (err)
                break;

            bpf_map_lookup_elem(map_fd, &next_key, &value);

            // Use key and value here

            cur_key = &next_key;
        }
    }

@@ -510,6 +510,9 @@ u32 aarch64_insn_gen_load_store_imm(enum aarch64_insn_register reg,
unsigned int imm,
enum aarch64_insn_size_type size,
enum aarch64_insn_ldst_type type);
u32 aarch64_insn_gen_load_literal(unsigned long pc, unsigned long addr,
enum aarch64_insn_register reg,
bool is64bit);
u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
enum aarch64_insn_register reg2,
enum aarch64_insn_register base,

@@ -323,7 +323,7 @@ static u32 aarch64_insn_encode_ldst_size(enum aarch64_insn_size_type type,
return insn;
}

static inline long branch_imm_common(unsigned long pc, unsigned long addr,
static inline long label_imm_common(unsigned long pc, unsigned long addr,
long range)
{
long offset;
@@ -354,7 +354,7 @@ u32 __kprobes aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr,
* ARM64 virtual address arrangement guarantees all kernel and module
* texts are within +/-128M.
*/
offset = branch_imm_common(pc, addr, SZ_128M);
offset = label_imm_common(pc, addr, SZ_128M);
if (offset >= SZ_128M)
return AARCH64_BREAK_FAULT;

@@ -382,7 +382,7 @@ u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr,
u32 insn;
long offset;

offset = branch_imm_common(pc, addr, SZ_1M);
offset = label_imm_common(pc, addr, SZ_1M);
if (offset >= SZ_1M)
return AARCH64_BREAK_FAULT;

@@ -421,7 +421,7 @@ u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr,
u32 insn;
long offset;

offset = branch_imm_common(pc, addr, SZ_1M);
offset = label_imm_common(pc, addr, SZ_1M);

insn = aarch64_insn_get_bcond_value();

@@ -543,6 +543,28 @@ u32 aarch64_insn_gen_load_store_imm(enum aarch64_insn_register reg,
return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_12, insn, imm);
}

u32 aarch64_insn_gen_load_literal(unsigned long pc, unsigned long addr,
enum aarch64_insn_register reg,
bool is64bit)
{
u32 insn;
long offset;

offset = label_imm_common(pc, addr, SZ_1M);
if (offset >= SZ_1M)
return AARCH64_BREAK_FAULT;

insn = aarch64_insn_get_ldr_lit_value();

if (is64bit)
insn |= BIT(30);

insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn, reg);

return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_19, insn,
offset >> 2);
}

u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
enum aarch64_insn_register reg2,
enum aarch64_insn_register base,

@@ -80,6 +80,12 @@
#define A64_STR64I(Xt, Xn, imm) A64_LS_IMM(Xt, Xn, imm, 64, STORE)
#define A64_LDR64I(Xt, Xn, imm) A64_LS_IMM(Xt, Xn, imm, 64, LOAD)

/* LDR (literal) */
#define A64_LDR32LIT(Wt, offset) \
aarch64_insn_gen_load_literal(0, offset, Wt, false)
#define A64_LDR64LIT(Xt, offset) \
aarch64_insn_gen_load_literal(0, offset, Xt, true)

/* Load/store register pair */
#define A64_LS_PAIR(Rt, Rt2, Rn, offset, ls, type) \
aarch64_insn_gen_load_store_pair(Rt, Rt2, Rn, offset, \
@@ -270,6 +276,7 @@
#define A64_BTI_C A64_HINT(AARCH64_INSN_HINT_BTIC)
#define A64_BTI_J A64_HINT(AARCH64_INSN_HINT_BTIJ)
#define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC)
#define A64_NOP A64_HINT(AARCH64_INSN_HINT_NOP)

/* DMB */
#define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)

File diff suppressed because it is too large
@@ -1950,23 +1950,6 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
return 0;
}

static bool is_valid_bpf_tramp_flags(unsigned int flags)
{
if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
(flags & BPF_TRAMP_F_SKIP_FRAME))
return false;

/*
* BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
* and it must be used alone.
*/
if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) &&
(flags & ~BPF_TRAMP_F_RET_FENTRY_RET))
return false;

return true;
}

/* Example:
* __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
* its 'struct btf_func_model' will be nr_args=2
@@ -2045,9 +2028,6 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
if (nr_args > 6)
return -ENOTSUPP;

if (!is_valid_bpf_tramp_flags(flags))
return -EINVAL;

/* Generated trampoline stack layout:
*
* RBP + 8 [ return address ]
@@ -2153,10 +2133,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
if (flags & BPF_TRAMP_F_CALL_ORIG) {
restore_regs(m, &prog, nr_args, regs_off);

/* call original function */
if (emit_call(&prog, orig_call, prog)) {
ret = -EINVAL;
goto cleanup;
if (flags & BPF_TRAMP_F_ORIG_STACK) {
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
EMIT2(0xff, 0xd0); /* call *rax */
} else {
/* call original function */
if (emit_call(&prog, orig_call, prog)) {
ret = -EINVAL;
goto cleanup;
}
}
/* remember return value in a stack for bpf prog to access */
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
@@ -2520,3 +2505,28 @@ bool bpf_jit_supports_subprog_tailcalls(void)
{
return true;
}

void bpf_jit_free(struct bpf_prog *prog)
{
if (prog->jited) {
struct x64_jit_data *jit_data = prog->aux->jit_data;
struct bpf_binary_header *hdr;

/*
* If we fail the final pass of JIT (from jit_subprogs),
* the program may not be finalized yet. Call finalize here
* before freeing it.
*/
if (jit_data) {
bpf_jit_binary_pack_finalize(prog, jit_data->header,
jit_data->rw_header);
kvfree(jit_data->addrs);
kfree(jit_data);
}
hdr = bpf_jit_binary_pack_hdr(prog);
bpf_jit_binary_pack_free(hdr, NULL);
WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(prog));
}

bpf_prog_unlock_free(prog);
}

@@ -47,6 +47,7 @@ struct kobject;
struct mem_cgroup;
struct module;
struct bpf_func_state;
struct ftrace_ops;

extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
@@ -221,7 +222,7 @@ struct bpf_map {
u32 btf_vmlinux_value_type_id;
struct btf *btf;
#ifdef CONFIG_MEMCG_KMEM
struct mem_cgroup *memcg;
struct obj_cgroup *objcg;
#endif
char name[BPF_OBJ_NAME_LEN];
struct bpf_map_off_arr *off_arr;
@@ -751,6 +752,16 @@ struct btf_func_model {
/* Return the return value of fentry prog. Only used by bpf_struct_ops. */
#define BPF_TRAMP_F_RET_FENTRY_RET BIT(4)

/* Get original function from stack instead of from provided direct address.
* Makes sense for trampolines with fexit or fmod_ret programs.
*/
#define BPF_TRAMP_F_ORIG_STACK BIT(5)

/* This trampoline is on a function with another ftrace_ops with IPMODIFY,
* e.g., a live patch. This flag is set and cleared by ftrace call backs,
*/
#define BPF_TRAMP_F_SHARE_IPMODIFY BIT(6)

/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
* bytes on x86.
*/
@@ -833,9 +844,11 @@ struct bpf_tramp_image {
struct bpf_trampoline {
/* hlist for trampoline_table */
struct hlist_node hlist;
struct ftrace_ops *fops;
/* serializes access to fields of this trampoline */
struct mutex mutex;
refcount_t refcnt;
u32 flags;
u64 key;
struct {
struct btf_func_model model;
@@ -1044,7 +1057,6 @@ struct bpf_prog_aux {
bool sleepable;
bool tail_call_reachable;
bool xdp_has_frags;
bool use_bpf_prog_pack;
/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
const struct btf_type *attach_func_proto;
/* function name for valid attach_btf_id */
@@ -1255,9 +1267,6 @@ struct bpf_dummy_ops {
int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
#endif
int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
int cgroup_atype);
void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog);
#else
static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
{
@@ -1281,6 +1290,13 @@ static inline int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map,
{
return -EINVAL;
}
#endif

#if defined(CONFIG_CGROUP_BPF) && defined(CONFIG_BPF_LSM)
int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
int cgroup_atype);
void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog);
#else
static inline int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
int cgroup_atype)
{
@@ -1921,7 +1937,8 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog,
struct bpf_reg_state *regs);
int btf_check_kfunc_arg_match(struct bpf_verifier_env *env,
const struct btf *btf, u32 func_id,
struct bpf_reg_state *regs);
struct bpf_reg_state *regs,
u32 kfunc_flags);
int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
struct bpf_reg_state *reg);
int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog,

@@ -345,10 +345,10 @@ struct bpf_verifier_state_list {
};

struct bpf_loop_inline_state {
int initialized:1; /* set to true upon first entry */
int fit_for_inline:1; /* true if callback function is the same
* at each call and flags are always zero
*/
unsigned int initialized:1; /* set to true upon first entry */
unsigned int fit_for_inline:1; /* true if callback function is the same
* at each call and flags are always zero
*/
u32 callback_subprogno; /* valid when fit_for_inline is true */
};


@@ -12,14 +12,43 @@
#define BTF_TYPE_EMIT(type) ((void)(type *)0)
#define BTF_TYPE_EMIT_ENUM(enum_val) ((void)enum_val)

enum btf_kfunc_type {
BTF_KFUNC_TYPE_CHECK,
BTF_KFUNC_TYPE_ACQUIRE,
BTF_KFUNC_TYPE_RELEASE,
BTF_KFUNC_TYPE_RET_NULL,
BTF_KFUNC_TYPE_KPTR_ACQUIRE,
BTF_KFUNC_TYPE_MAX,
};
/* These need to be macros, as the expressions are used in assembler input */
#define KF_ACQUIRE (1 << 0) /* kfunc is an acquire function */
#define KF_RELEASE (1 << 1) /* kfunc is a release function */
#define KF_RET_NULL (1 << 2) /* kfunc returns a pointer that may be NULL */
#define KF_KPTR_GET (1 << 3) /* kfunc returns reference to a kptr */
/* Trusted arguments are those which are meant to be referenced arguments with
* unchanged offset. It is used to enforce that pointers obtained from acquire
* kfuncs remain unmodified when being passed to helpers taking trusted args.
*
* Consider
* struct foo {
* int data;
* struct foo *next;
* };
*
* struct bar {
* int data;
* struct foo f;
* };
*
* struct foo *f = alloc_foo(); // Acquire kfunc
* struct bar *b = alloc_bar(); // Acquire kfunc
*
* If a kfunc set_foo_data() wants to operate only on the allocated object, it
* will set the KF_TRUSTED_ARGS flag, which will prevent unsafe usage like:
*
* set_foo_data(f, 42); // Allowed
* set_foo_data(f->next, 42); // Rejected, non-referenced pointer
* set_foo_data(&f->next, 42);// Rejected, referenced, but wrong type
* set_foo_data(&b->f, 42); // Rejected, referenced, but bad offset
*
* In the final case, usually for the purposes of type matching, it is deduced
* by looking at the type of the member at the offset, but due to the
* requirement of trusted argument, this deduction will be strict and not done
* for this case.
*/
#define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */

struct btf;
struct btf_member;
@@ -30,16 +59,7 @@ struct btf_id_set;

struct btf_kfunc_id_set {
struct module *owner;
union {
struct {
struct btf_id_set *check_set;
struct btf_id_set *acquire_set;
struct btf_id_set *release_set;
struct btf_id_set *ret_null_set;
struct btf_id_set *kptr_acquire_set;
};
struct btf_id_set *sets[BTF_KFUNC_TYPE_MAX];
};
struct btf_id_set8 *set;
};

struct btf_id_dtor_kfunc {
@@ -378,9 +398,9 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
const char *btf_name_by_offset(const struct btf *btf, u32 offset);
struct btf *btf_parse_vmlinux(void);
struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
bool btf_kfunc_id_set_contains(const struct btf *btf,
u32 *btf_kfunc_id_set_contains(const struct btf *btf,
enum bpf_prog_type prog_type,
enum btf_kfunc_type type, u32 kfunc_btf_id);
u32 kfunc_btf_id);
int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
const struct btf_kfunc_id_set *s);
s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id);
@@ -397,12 +417,11 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
{
return NULL;
}
static inline bool btf_kfunc_id_set_contains(const struct btf *btf,
static inline u32 *btf_kfunc_id_set_contains(const struct btf *btf,
enum bpf_prog_type prog_type,
enum btf_kfunc_type type,
u32 kfunc_btf_id)
{
return false;
return NULL;
}
static inline int register_btf_kfunc_id_set(enum bpf_prog_type prog_type,
const struct btf_kfunc_id_set *s)

@@ -8,6 +8,15 @@ struct btf_id_set {
u32 ids[];
};

struct btf_id_set8 {
u32 cnt;
u32 flags;
struct {
u32 id;
u32 flags;
} pairs[];
};

#ifdef CONFIG_DEBUG_INFO_BTF

#include <linux/compiler.h> /* for __PASTE */
@@ -25,7 +34,7 @@ struct btf_id_set {

#define BTF_IDS_SECTION ".BTF_ids"

#define ____BTF_ID(symbol) \
#define ____BTF_ID(symbol, word) \
asm( \
".pushsection " BTF_IDS_SECTION ",\"a\"; \n" \
".local " #symbol " ; \n" \
@@ -33,10 +42,11 @@ asm( \
".size " #symbol ", 4; \n" \
#symbol ": \n" \
".zero 4 \n" \
word \
".popsection; \n");

#define __BTF_ID(symbol) \
____BTF_ID(symbol)
#define __BTF_ID(symbol, word) \
____BTF_ID(symbol, word)

#define __ID(prefix) \
__PASTE(prefix, __COUNTER__)
@@ -46,7 +56,14 @@ asm( \
* to 4 zero bytes.
*/
#define BTF_ID(prefix, name) \
__BTF_ID(__ID(__BTF_ID__##prefix##__##name##__))
__BTF_ID(__ID(__BTF_ID__##prefix##__##name##__), "")

#define ____BTF_ID_FLAGS(prefix, name, flags) \
__BTF_ID(__ID(__BTF_ID__##prefix##__##name##__), ".long " #flags "\n")
#define __BTF_ID_FLAGS(prefix, name, flags, ...) \
____BTF_ID_FLAGS(prefix, name, flags)
#define BTF_ID_FLAGS(prefix, name, ...) \
__BTF_ID_FLAGS(prefix, name, ##__VA_ARGS__, 0)

/*
* The BTF_ID_LIST macro defines pure (unsorted) list
@@ -145,10 +162,51 @@ asm( \
".popsection; \n"); \
extern struct btf_id_set name;

/*
* The BTF_SET8_START/END macros pair defines sorted list of
* BTF IDs and their flags plus its members count, with the
* following layout:
*
* BTF_SET8_START(list)
* BTF_ID_FLAGS(type1, name1, flags)
* BTF_ID_FLAGS(type2, name2, flags)
* BTF_SET8_END(list)
*
* __BTF_ID__set8__list:
* .zero 8
* list:
* __BTF_ID__type1__name1__3:
* .zero 4
* .word (1 << 0) | (1 << 2)
* __BTF_ID__type2__name2__5:
* .zero 4
* .word (1 << 3) | (1 << 1) | (1 << 2)
*
*/
#define __BTF_SET8_START(name, scope) \
asm( \
".pushsection " BTF_IDS_SECTION ",\"a\"; \n" \
"." #scope " __BTF_ID__set8__" #name "; \n" \
"__BTF_ID__set8__" #name ":; \n" \
".zero 8 \n" \
".popsection; \n");

#define BTF_SET8_START(name) \
__BTF_ID_LIST(name, local) \
__BTF_SET8_START(name, local)

#define BTF_SET8_END(name) \
asm( \
".pushsection " BTF_IDS_SECTION ",\"a\"; \n" \
".size __BTF_ID__set8__" #name ", .-" #name " \n" \
".popsection; \n"); \
extern struct btf_id_set8 name;

#else

#define BTF_ID_LIST(name) static u32 __maybe_unused name[5];
#define BTF_ID(prefix, name)
#define BTF_ID_FLAGS(prefix, name, ...)
#define BTF_ID_UNUSED
#define BTF_ID_LIST_GLOBAL(name, n) u32 __maybe_unused name[n];
#define BTF_ID_LIST_SINGLE(name, prefix, typename) static u32 __maybe_unused name[1];
@@ -156,6 +214,8 @@ extern struct btf_id_set name;
#define BTF_SET_START(name) static struct btf_id_set __maybe_unused name = { 0 };
#define BTF_SET_START_GLOBAL(name) static struct btf_id_set __maybe_unused name = { 0 };
#define BTF_SET_END(name)
#define BTF_SET8_START(name) static struct btf_id_set8 __maybe_unused name = { 0 };
#define BTF_SET8_END(name)

#endif /* CONFIG_DEBUG_INFO_BTF */


@@ -1027,6 +1027,14 @@ u64 bpf_jit_alloc_exec_limit(void);
void *bpf_jit_alloc_exec(unsigned long size);
void bpf_jit_free_exec(void *addr);
void bpf_jit_free(struct bpf_prog *fp);
struct bpf_binary_header *
bpf_jit_binary_pack_hdr(const struct bpf_prog *fp);

static inline bool bpf_prog_kallsyms_verify_off(const struct bpf_prog *fp)
{
return list_empty(&fp->aux->ksym.lnode) ||
fp->aux->ksym.lnode.prev == LIST_POISON2;
}

struct bpf_binary_header *
bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **ro_image,

@@ -208,6 +208,43 @@ enum {
FTRACE_OPS_FL_DIRECT = BIT(17),
};

/*
* FTRACE_OPS_CMD_* commands allow the ftrace core logic to request changes
* to a ftrace_ops. Note, the requests may fail.
*
* ENABLE_SHARE_IPMODIFY_SELF - enable a DIRECT ops to work on the same
* function as an ops with IPMODIFY. Called
* when the DIRECT ops is being registered.
* This is called with both direct_mutex and
* ftrace_lock are locked.
*
* ENABLE_SHARE_IPMODIFY_PEER - enable a DIRECT ops to work on the same
* function as an ops with IPMODIFY. Called
* when the other ops (the one with IPMODIFY)
* is being registered.
* This is called with direct_mutex locked.
*
* DISABLE_SHARE_IPMODIFY_PEER - disable a DIRECT ops to work on the same
* function as an ops with IPMODIFY. Called
* when the other ops (the one with IPMODIFY)
* is being unregistered.
* This is called with direct_mutex locked.
*/
enum ftrace_ops_cmd {
FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF,
FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER,
FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER,
};

/*
* For most ftrace_ops_cmd,
* Returns:
* 0 - Success.
* Negative on failure. The return value is dependent on the
* callback.
*/
typedef int (*ftrace_ops_func_t)(struct ftrace_ops *op, enum ftrace_ops_cmd cmd);

#ifdef CONFIG_DYNAMIC_FTRACE
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
@@ -250,6 +287,7 @@ struct ftrace_ops {
unsigned long trampoline;
unsigned long trampoline_size;
struct list_head list;
ftrace_ops_func_t ops_func;
#endif
};

@@ -340,6 +378,7 @@ unsigned long ftrace_find_rec_direct(unsigned long ip);
int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
int unregister_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr);

#else
struct ftrace_ops;
@@ -384,6 +423,10 @@ static inline int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned lo
{
return -ENODEV;
}
static inline int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
{
return -ENODEV;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */

#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

@@ -2487,6 +2487,14 @@ static inline void skb_set_tail_pointer(struct sk_buff *skb, const int offset)

#endif /* NET_SKBUFF_DATA_USES_OFFSET */

static inline void skb_assert_len(struct sk_buff *skb)
{
#ifdef CONFIG_DEBUG_NET
if (WARN_ONCE(!skb->len, "%s\n", __func__))
DO_ONCE_LITE(skb_dump, KERN_ERR, skb, false);
#endif /* CONFIG_DEBUG_NET */
}

/*
* Add data to an sk_buff
*/

@@ -84,4 +84,23 @@ void nf_conntrack_lock(spinlock_t *lock);

extern spinlock_t nf_conntrack_expect_lock;

/* ctnetlink code shared by both ctnetlink and nf_conntrack_bpf */

#if (IS_BUILTIN(CONFIG_NF_CONNTRACK) && IS_ENABLED(CONFIG_DEBUG_INFO_BTF)) || \
(IS_MODULE(CONFIG_NF_CONNTRACK) && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES) || \
IS_ENABLED(CONFIG_NF_CT_NETLINK))

static inline void __nf_ct_set_timeout(struct nf_conn *ct, u64 timeout)
{
if (timeout > INT_MAX)
timeout = INT_MAX;
WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout);
}

int __nf_ct_change_timeout(struct nf_conn *ct, u64 cta_timeout);
void __nf_ct_change_status(struct nf_conn *ct, unsigned long on, unsigned long off);
int nf_ct_change_status_common(struct nf_conn *ct, unsigned int status);

#endif

#endif /* _NF_CONNTRACK_CORE_H */

@@ -44,6 +44,15 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
xp_set_rxq_info(pool, rxq);
}

static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
{
#ifdef CONFIG_NET_RX_BUSY_POLL
return pool->heads[0].xdp.rxq->napi_id;
#else
return 0;
#endif
}

static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
unsigned long attrs)
{
@@ -198,6 +207,11 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
{
}

static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
{
return 0;
}

static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
unsigned long attrs)
{

@@ -2361,7 +2361,8 @@ union bpf_attr {
* Pull in non-linear data in case the *skb* is non-linear and not
* all of *len* are part of the linear section. Make *len* bytes
* from *skb* readable and writable. If a zero value is passed for
* *len*, then the whole length of the *skb* is pulled.
* *len*, then all bytes in the linear part of *skb* will be made
* readable and writable.
*
* This helper is only needed for reading and writing with direct
* packet access.

@@ -70,10 +70,8 @@ int array_map_alloc_check(union bpf_attr *attr)
attr->map_flags & BPF_F_PRESERVE_ELEMS)
return -EINVAL;

if (attr->value_size > KMALLOC_MAX_SIZE)
/* if value_size is bigger, the user space won't be able to
* access the elements.
*/
/* avoid overflow on round_up(map->value_size) */
if (attr->value_size > INT_MAX)
return -E2BIG;

return 0;
@@ -156,6 +154,11 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
return &array->map;
}

static void *array_map_elem_ptr(struct bpf_array* array, u32 index)
{
return array->value + (u64)array->elem_size * index;
}

/* Called from syscall or from eBPF program */
static void *array_map_lookup_elem(struct bpf_map *map, void *key)
{
@@ -165,7 +168,7 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
if (unlikely(index >= array->map.max_entries))
return NULL;

return array->value + array->elem_size * (index & array->index_mask);
return array->value + (u64)array->elem_size * (index & array->index_mask);
}

static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
@@ -203,7 +206,7 @@ static int array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
struct bpf_insn *insn = insn_buf;
u32 elem_size = round_up(map->value_size, 8);
u32 elem_size = array->elem_size;
const int ret = BPF_REG_0;
const int map_ptr = BPF_REG_1;
const int index = BPF_REG_2;
@@ -272,7 +275,7 @@ int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value)
* access 'value_size' of them, so copying rounded areas
* will not leak any kernel data
*/
size = round_up(map->value_size, 8);
size = array->elem_size;
rcu_read_lock();
pptr = array->pptrs[index & array->index_mask];
for_each_possible_cpu(cpu) {
@@ -339,7 +342,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
value, map->value_size);
} else {
val = array->value +
array->elem_size * (index & array->index_mask);
(u64)array->elem_size * (index & array->index_mask);
if (map_flags & BPF_F_LOCK)
copy_map_value_locked(map, val, value, false);
else
@@ -376,7 +379,7 @@ int bpf_percpu_array_update(struct bpf_map *map, void *key, void *value,
* returned or zeros which were zero-filled by percpu_alloc,
* so no kernel data leaks possible
*/
size = round_up(map->value_size, 8);
size = array->elem_size;
rcu_read_lock();
pptr = array->pptrs[index & array->index_mask];
for_each_possible_cpu(cpu) {
@@ -408,8 +411,7 @@ static void array_map_free_timers(struct bpf_map *map)
return;

for (i = 0; i < array->map.max_entries; i++)
bpf_timer_cancel_and_free(array->value + array->elem_size * i +
map->timer_off);
bpf_timer_cancel_and_free(array_map_elem_ptr(array, i) + map->timer_off);
}

/* Called when map->refcnt goes to zero, either from workqueue or from syscall */
@@ -420,7 +422,7 @@ static void array_map_free(struct bpf_map *map)

if (map_value_has_kptrs(map)) {
for (i = 0; i < array->map.max_entries; i++)
bpf_map_free_kptrs(map, array->value + array->elem_size * i);
bpf_map_free_kptrs(map, array_map_elem_ptr(array, i));
bpf_map_free_kptr_off_tab(map);
}

@@ -556,7 +558,7 @@ static void *bpf_array_map_seq_start(struct seq_file *seq, loff_t *pos)
index = info->index & array->index_mask;
if (info->percpu_value_buf)
return array->pptrs[index];
return array->value + array->elem_size * index;
return array_map_elem_ptr(array, index);
}

static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
@@ -575,7 +577,7 @@ static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
index = info->index & array->index_mask;
if (info->percpu_value_buf)
return array->pptrs[index];
return array->value + array->elem_size * index;
return array_map_elem_ptr(array, index);
}

static int __bpf_array_map_seq_show(struct seq_file *seq, void *v)
@@ -583,6 +585,7 @@ static int __bpf_array_map_seq_show(struct seq_file *seq, void *v)
struct bpf_iter_seq_array_map_info *info = seq->private;
struct bpf_iter__bpf_map_elem ctx = {};
struct bpf_map *map = info->map;
struct bpf_array *array = container_of(map, struct bpf_array, map);
struct bpf_iter_meta meta;
struct bpf_prog *prog;
int off = 0, cpu = 0;
@@ -603,7 +606,7 @@ static int __bpf_array_map_seq_show(struct seq_file *seq, void *v)
ctx.value = v;
} else {
pptr = v;
size = round_up(map->value_size, 8);
size = array->elem_size;
for_each_possible_cpu(cpu) {
bpf_long_memcpy(info->percpu_value_buf + off,
per_cpu_ptr(pptr, cpu),
@@ -633,11 +636,12 @@ static int bpf_iter_init_array_map(void *priv_data,
{
struct bpf_iter_seq_array_map_info *seq_info = priv_data;
struct bpf_map *map = aux->map;
struct bpf_array *array = container_of(map, struct bpf_array, map);
void *value_buf;
u32 buf_size;

if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
buf_size = round_up(map->value_size, 8) * num_possible_cpus();
buf_size = array->elem_size * num_possible_cpus();
value_buf = kmalloc(buf_size, GFP_USER | __GFP_NOWARN);
if (!value_buf)
return -ENOMEM;
@@ -690,7 +694,7 @@ static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_
if (is_percpu)
val = this_cpu_ptr(array->pptrs[i]);
else
val = array->value + array->elem_size * i;
val = array_map_elem_ptr(array, i);
num_elems++;
key = i;
ret = callback_fn((u64)(long)map, (u64)(long)&key,
@@ -1322,7 +1326,7 @@ static int array_of_map_gen_lookup(struct bpf_map *map,
struct bpf_insn *insn_buf)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
u32 elem_size = round_up(map->value_size, 8);
u32 elem_size = array->elem_size;
struct bpf_insn *insn = insn_buf;
const int ret = BPF_REG_0;
const int map_ptr = BPF_REG_1;
const int index = BPF_REG_2;
Some files were not shown because too many files have changed in this diff.