net: WireGuard secure network tunnel
WireGuard is a layer 3 secure networking tunnel made specifically for the kernel, that aims to be much simpler and easier to audit than IPsec. Extensive documentation and description of the protocol and considerations, along with formal proofs of the cryptography, are available at:

  * https://www.wireguard.com/
  * https://www.wireguard.com/papers/wireguard.pdf

This commit implements WireGuard as a simple network device driver, accessible in the usual RTNL way used by virtual network drivers. It makes use of the udp_tunnel APIs, GRO, GSO, NAPI, and the usual set of networking subsystem APIs. It has a somewhat novel multicore queueing system designed for maximum throughput and minimal latency of encryption operations, but it is implemented modestly using workqueues and NAPI. Configuration is done via generic Netlink, and following a review from the Netlink maintainer a year ago, several high profile userspace tools have already implemented the API.

This commit also comes with several different tests, both in-kernel tests and out-of-kernel tests based on network namespaces, taking advantage of the fact that sockets used by WireGuard intentionally stay in the namespace the WireGuard interface was originally created in, exactly like the semantics of userspace tun devices. See wireguard.com/netns/ for pictures and examples.

The source code is fairly short, but rather than combining everything into a single file, WireGuard is developed as cleanly separable files, making auditing and comprehension easier. Things are laid out as follows:

  * noise.[ch], cookie.[ch], messages.h: These implement the bulk of the cryptographic aspects of the protocol, and are mostly data-only in nature, taking in buffers of bytes and spitting out buffers of bytes. They also handle reference counting for their various shared pieces of data, like keys and key lists.

  * ratelimiter.[ch]: Used as an integral part of cookie.[ch] for ratelimiting certain types of cryptographic operations in accordance with particular WireGuard semantics.

  * allowedips.[ch], peerlookup.[ch]: The main lookup structures of WireGuard, the former being trie-like with particular semantics, an integral part of the design of the protocol, and the latter just being nice helper functions around the various hashtables we use.

  * device.[ch]: Implementation of functions for the netdevice and for rtnl, responsible for maintaining the life of a given interface and wiring it up to the rest of WireGuard.

  * peer.[ch]: Each interface has a list of peers, with helper functions available here for creation, destruction, and reference counting.

  * socket.[ch]: Implementation of functions related to udp_socket and the general set of kernel socket APIs, for sending and receiving ciphertext UDP packets, and taking care of WireGuard-specific sticky socket routing semantics for the automatic roaming.

  * netlink.[ch]: Userspace API entry point for configuring WireGuard peers and devices. The API has been implemented by several userspace tools and network management utilities, and the WireGuard project distributes the basic wg(8) tool.

  * queueing.[ch]: Shared functions on the rx and tx paths for handling the various queues used in the multicore algorithms.

  * send.c: Handles encrypting outgoing packets in parallel on multiple cores, before sending them in order on a single core, via workqueues and ring buffers. Also handles sending handshake and cookie messages as part of the protocol, in parallel.

  * receive.c: Handles decrypting incoming packets in parallel on multiple cores, before passing them off in order to be ingested via the rest of the networking subsystem with GRO via the typical NAPI poll function. Also handles receiving handshake and cookie messages as part of the protocol, in parallel.

  * timers.[ch]: Uses the timer wheel to implement protocol-particular event timeouts, and gives a set of very simple event-driven entry point functions for callers.

  * main.c, version.h: Initialization and deinitialization of the module.

  * selftest/*.h: Runtime unit tests for some of the most security sensitive functions.

  * tools/testing/selftests/wireguard/netns.sh: Aforementioned testing script using network namespaces.

This commit aims to be as self-contained as possible, implementing WireGuard as a standalone module not needing much special handling or coordination from the network subsystem. I expect future optimizations to the network stack to positively improve WireGuard, and vice-versa, but for the time being, this exists as intentionally standalone.

We introduce a menu option for CONFIG_WIREGUARD, as well as providing a verbose debug log and self-tests via CONFIG_WIREGUARD_DEBUG.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: David Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
commit e7096c131e (parent e42617b825), committed by David S. Miller
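As a concrete illustration of the RTNL and generic-Netlink configuration flow described in the commit message, here is a minimal usage sketch with the iproute2 and wg(8) userspace tools; the interface name, addresses, and key material below are placeholders, not part of this commit:

    # Create the interface via RTNL, as with any other virtual netdev.
    ip link add dev wg0 type wireguard

    # Configure keys and a peer over generic Netlink using wg(8).
    wg set wg0 private-key ./privatekey \
        peer 'PEER_PUBLIC_KEY_BASE64' \
        allowed-ips 10.0.0.2/32 \
        endpoint 192.0.2.7:51820

    ip addr add 10.0.0.1/24 dev wg0
    ip link set wg0 up

Because the UDP socket stays in the namespace where the interface was created, moving wg0 into another namespace with "ip link set wg0 netns <ns>" keeps the encrypted traffic flowing through the original namespace, matching the netns semantics described above.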
MAINTAINERS
@@ -17850,6 +17850,14 @@ L: linux-gpio@vger.kernel.org
 S:	Maintained
 F:	drivers/gpio/gpio-ws16c48.c
 
+WIREGUARD SECURE NETWORK TUNNEL
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
+S:	Maintained
+F:	drivers/net/wireguard/
+F:	tools/testing/selftests/wireguard/
+L:	wireguard@lists.zx2c4.com
+L:	netdev@vger.kernel.org
+
 WISTRON LAPTOP BUTTON DRIVER
 M:	Miloslav Trmac <mitr@volny.cz>
 S:	Maintained
drivers/net/Kconfig
@@ -71,6 +71,47 @@ config DUMMY
 	  To compile this driver as a module, choose M here: the module
 	  will be called dummy.
 
+config WIREGUARD
+	tristate "WireGuard secure network tunnel"
+	depends on NET && INET
+	depends on IPV6 || !IPV6
+	select NET_UDP_TUNNEL
+	select DST_CACHE
+	select CRYPTO
+	select CRYPTO_LIB_CURVE25519
+	select CRYPTO_LIB_CHACHA20POLY1305
+	select CRYPTO_LIB_BLAKE2S
+	select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
+	select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
+	select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
+	select CRYPTO_CURVE25519_X86 if X86 && 64BIT
+	select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
+	select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
+	select CRYPTO_POLY1305_ARM if ARM
+	select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
+	select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
+	select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+	help
+	  WireGuard is a secure, fast, and easy to use replacement for IPSec
+	  that uses modern cryptography and clever networking tricks. It's
+	  designed to be fairly general purpose and abstract enough to fit most
+	  use cases, while at the same time remaining extremely simple to
+	  configure. See www.wireguard.com for more info.
+
+	  It's safe to say Y or M here, as the driver is very lightweight and
+	  is only in use when an administrator chooses to add an interface.
+
+config WIREGUARD_DEBUG
+	bool "Debugging checks and verbose messages"
+	depends on WIREGUARD
+	help
+	  This will write log messages for handshake and other events
+	  that occur for a WireGuard interface. It will also perform some
+	  extra validation checks and unit tests at various points. This is
+	  only useful for debugging.
+
+	  Say N here unless you know what you're doing.
+
 config EQUALIZER
 	tristate "EQL (serial line load balancing) support"
 	---help---
drivers/net/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_BONDING) += bonding/
 obj-$(CONFIG_IPVLAN) += ipvlan/
 obj-$(CONFIG_IPVTAP) += ipvlan/
 obj-$(CONFIG_DUMMY) += dummy.o
+obj-$(CONFIG_WIREGUARD) += wireguard/
 obj-$(CONFIG_EQUALIZER) += eql.o
 obj-$(CONFIG_IFB) += ifb.o
 obj-$(CONFIG_MACSEC) += macsec.o
drivers/net/wireguard/Makefile (new file, 18 lines)
@@ -0,0 +1,18 @@
ccflags-y := -O3
ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG
wireguard-y := main.o
wireguard-y += noise.o
wireguard-y += device.o
wireguard-y += peer.o
wireguard-y += timers.o
wireguard-y += queueing.o
wireguard-y += send.o
wireguard-y += receive.o
wireguard-y += socket.o
wireguard-y += peerlookup.o
wireguard-y += allowedips.o
wireguard-y += ratelimiter.o
wireguard-y += cookie.o
wireguard-y += netlink.o
obj-$(CONFIG_WIREGUARD) := wireguard.o
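Given the Makefile above, all of the objects link into a single wireguard.ko whose module name follows from the obj-$(CONFIG_WIREGUARD) line; a minimal sketch of loading the module and confirming the banner printed by main.c, assuming CONFIG_WIREGUARD=m:

    modprobe wireguard
    dmesg | grep -i wireguard   # expect the "WireGuard ... loaded" pr_info line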
drivers/net/wireguard/allowedips.c (new file, 381 lines)
@@ -0,0 +1,381 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "allowedips.h"
#include "peer.h"

static void swap_endian(u8 *dst, const u8 *src, u8 bits)
{
	if (bits == 32) {
		*(u32 *)dst = be32_to_cpu(*(const __be32 *)src);
	} else if (bits == 128) {
		((u64 *)dst)[0] = be64_to_cpu(((const __be64 *)src)[0]);
		((u64 *)dst)[1] = be64_to_cpu(((const __be64 *)src)[1]);
	}
}

static void copy_and_assign_cidr(struct allowedips_node *node, const u8 *src,
				 u8 cidr, u8 bits)
{
	node->cidr = cidr;
	node->bit_at_a = cidr / 8U;
#ifdef __LITTLE_ENDIAN
	node->bit_at_a ^= (bits / 8U - 1U) % 8U;
#endif
	node->bit_at_b = 7U - (cidr % 8U);
	node->bitlen = bits;
	memcpy(node->bits, src, bits / 8U);
}
#define CHOOSE_NODE(parent, key) \
	parent->bit[(key[parent->bit_at_a] >> parent->bit_at_b) & 1]

static void node_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct allowedips_node, rcu));
}

static void push_rcu(struct allowedips_node **stack,
		     struct allowedips_node __rcu *p, unsigned int *len)
{
	if (rcu_access_pointer(p)) {
		WARN_ON(IS_ENABLED(DEBUG) && *len >= 128);
		stack[(*len)++] = rcu_dereference_raw(p);
	}
}

static void root_free_rcu(struct rcu_head *rcu)
{
	struct allowedips_node *node, *stack[128] = {
		container_of(rcu, struct allowedips_node, rcu) };
	unsigned int len = 1;

	while (len > 0 && (node = stack[--len])) {
		push_rcu(stack, node->bit[0], &len);
		push_rcu(stack, node->bit[1], &len);
		kfree(node);
	}
}

static void root_remove_peer_lists(struct allowedips_node *root)
{
	struct allowedips_node *node, *stack[128] = { root };
	unsigned int len = 1;

	while (len > 0 && (node = stack[--len])) {
		push_rcu(stack, node->bit[0], &len);
		push_rcu(stack, node->bit[1], &len);
		if (rcu_access_pointer(node->peer))
			list_del(&node->peer_list);
	}
}

static void walk_remove_by_peer(struct allowedips_node __rcu **top,
				struct wg_peer *peer, struct mutex *lock)
{
#define REF(p) rcu_access_pointer(p)
#define DEREF(p) rcu_dereference_protected(*(p), lockdep_is_held(lock))
#define PUSH(p) ({                                            \
		WARN_ON(IS_ENABLED(DEBUG) && len >= 128);     \
		stack[len++] = p;                             \
	})

	struct allowedips_node __rcu **stack[128], **nptr;
	struct allowedips_node *node, *prev;
	unsigned int len;

	if (unlikely(!peer || !REF(*top)))
		return;

	for (prev = NULL, len = 0, PUSH(top); len > 0; prev = node) {
		nptr = stack[len - 1];
		node = DEREF(nptr);
		if (!node) {
			--len;
			continue;
		}
		if (!prev || REF(prev->bit[0]) == node ||
		    REF(prev->bit[1]) == node) {
			if (REF(node->bit[0]))
				PUSH(&node->bit[0]);
			else if (REF(node->bit[1]))
				PUSH(&node->bit[1]);
		} else if (REF(node->bit[0]) == prev) {
			if (REF(node->bit[1]))
				PUSH(&node->bit[1]);
		} else {
			if (rcu_dereference_protected(node->peer,
				lockdep_is_held(lock)) == peer) {
				RCU_INIT_POINTER(node->peer, NULL);
				list_del_init(&node->peer_list);
				if (!node->bit[0] || !node->bit[1]) {
					rcu_assign_pointer(*nptr, DEREF(
					       &node->bit[!REF(node->bit[0])]));
					call_rcu(&node->rcu, node_free_rcu);
					node = DEREF(nptr);
				}
			}
			--len;
		}
	}

#undef REF
#undef DEREF
#undef PUSH
}

static unsigned int fls128(u64 a, u64 b)
{
	return a ? fls64(a) + 64U : fls64(b);
}

static u8 common_bits(const struct allowedips_node *node, const u8 *key,
		      u8 bits)
{
	if (bits == 32)
		return 32U - fls(*(const u32 *)node->bits ^ *(const u32 *)key);
	else if (bits == 128)
		return 128U - fls128(
			*(const u64 *)&node->bits[0] ^ *(const u64 *)&key[0],
			*(const u64 *)&node->bits[8] ^ *(const u64 *)&key[8]);
	return 0;
}

static bool prefix_matches(const struct allowedips_node *node, const u8 *key,
			   u8 bits)
{
	/* This could be much faster if it actually just compared the common
	 * bits properly, by precomputing a mask bswap(~0 << (32 - cidr)), and
	 * the rest, but it turns out that common_bits is already super fast on
	 * modern processors, even taking into account the unfortunate bswap.
	 * So, we just inline it like this instead.
	 */
	return common_bits(node, key, bits) >= node->cidr;
}

static struct allowedips_node *find_node(struct allowedips_node *trie, u8 bits,
					 const u8 *key)
{
	struct allowedips_node *node = trie, *found = NULL;

	while (node && prefix_matches(node, key, bits)) {
		if (rcu_access_pointer(node->peer))
			found = node;
		if (node->cidr == bits)
			break;
		node = rcu_dereference_bh(CHOOSE_NODE(node, key));
	}
	return found;
}

/* Returns a strong reference to a peer */
static struct wg_peer *lookup(struct allowedips_node __rcu *root, u8 bits,
			      const void *be_ip)
{
	/* Aligned so it can be passed to fls/fls64 */
	u8 ip[16] __aligned(__alignof(u64));
	struct allowedips_node *node;
	struct wg_peer *peer = NULL;

	swap_endian(ip, be_ip, bits);

	rcu_read_lock_bh();
retry:
	node = find_node(rcu_dereference_bh(root), bits, ip);
	if (node) {
		peer = wg_peer_get_maybe_zero(rcu_dereference_bh(node->peer));
		if (!peer)
			goto retry;
	}
	rcu_read_unlock_bh();
	return peer;
}

static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
			   u8 cidr, u8 bits, struct allowedips_node **rnode,
			   struct mutex *lock)
{
	struct allowedips_node *node = rcu_dereference_protected(trie,
						lockdep_is_held(lock));
	struct allowedips_node *parent = NULL;
	bool exact = false;

	while (node && node->cidr <= cidr && prefix_matches(node, key, bits)) {
		parent = node;
		if (parent->cidr == cidr) {
			exact = true;
			break;
		}
		node = rcu_dereference_protected(CHOOSE_NODE(parent, key),
						 lockdep_is_held(lock));
	}
	*rnode = parent;
	return exact;
}

static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key,
	       u8 cidr, struct wg_peer *peer, struct mutex *lock)
{
	struct allowedips_node *node, *parent, *down, *newnode;

	if (unlikely(cidr > bits || !peer))
		return -EINVAL;

	if (!rcu_access_pointer(*trie)) {
		node = kzalloc(sizeof(*node), GFP_KERNEL);
		if (unlikely(!node))
			return -ENOMEM;
		RCU_INIT_POINTER(node->peer, peer);
		list_add_tail(&node->peer_list, &peer->allowedips_list);
		copy_and_assign_cidr(node, key, cidr, bits);
		rcu_assign_pointer(*trie, node);
		return 0;
	}
	if (node_placement(*trie, key, cidr, bits, &node, lock)) {
		rcu_assign_pointer(node->peer, peer);
		list_move_tail(&node->peer_list, &peer->allowedips_list);
		return 0;
	}

	newnode = kzalloc(sizeof(*newnode), GFP_KERNEL);
	if (unlikely(!newnode))
		return -ENOMEM;
	RCU_INIT_POINTER(newnode->peer, peer);
	list_add_tail(&newnode->peer_list, &peer->allowedips_list);
	copy_and_assign_cidr(newnode, key, cidr, bits);

	if (!node) {
		down = rcu_dereference_protected(*trie, lockdep_is_held(lock));
	} else {
		down = rcu_dereference_protected(CHOOSE_NODE(node, key),
						 lockdep_is_held(lock));
		if (!down) {
			rcu_assign_pointer(CHOOSE_NODE(node, key), newnode);
			return 0;
		}
	}
	cidr = min(cidr, common_bits(down, key, bits));
	parent = node;

	if (newnode->cidr == cidr) {
		rcu_assign_pointer(CHOOSE_NODE(newnode, down->bits), down);
		if (!parent)
			rcu_assign_pointer(*trie, newnode);
		else
			rcu_assign_pointer(CHOOSE_NODE(parent, newnode->bits),
					   newnode);
	} else {
		node = kzalloc(sizeof(*node), GFP_KERNEL);
		if (unlikely(!node)) {
			kfree(newnode);
			return -ENOMEM;
		}
		INIT_LIST_HEAD(&node->peer_list);
		copy_and_assign_cidr(node, newnode->bits, cidr, bits);

		rcu_assign_pointer(CHOOSE_NODE(node, down->bits), down);
		rcu_assign_pointer(CHOOSE_NODE(node, newnode->bits), newnode);
		if (!parent)
			rcu_assign_pointer(*trie, node);
		else
			rcu_assign_pointer(CHOOSE_NODE(parent, node->bits),
					   node);
	}
	return 0;
}

void wg_allowedips_init(struct allowedips *table)
{
	table->root4 = table->root6 = NULL;
	table->seq = 1;
}

void wg_allowedips_free(struct allowedips *table, struct mutex *lock)
{
	struct allowedips_node __rcu *old4 = table->root4, *old6 = table->root6;

	++table->seq;
	RCU_INIT_POINTER(table->root4, NULL);
	RCU_INIT_POINTER(table->root6, NULL);
	if (rcu_access_pointer(old4)) {
		struct allowedips_node *node = rcu_dereference_protected(old4,
							lockdep_is_held(lock));

		root_remove_peer_lists(node);
		call_rcu(&node->rcu, root_free_rcu);
	}
	if (rcu_access_pointer(old6)) {
		struct allowedips_node *node = rcu_dereference_protected(old6,
							lockdep_is_held(lock));

		root_remove_peer_lists(node);
		call_rcu(&node->rcu, root_free_rcu);
	}
}

int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
			    u8 cidr, struct wg_peer *peer, struct mutex *lock)
{
	/* Aligned so it can be passed to fls */
	u8 key[4] __aligned(__alignof(u32));

	++table->seq;
	swap_endian(key, (const u8 *)ip, 32);
	return add(&table->root4, 32, key, cidr, peer, lock);
}

int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
			    u8 cidr, struct wg_peer *peer, struct mutex *lock)
{
	/* Aligned so it can be passed to fls64 */
	u8 key[16] __aligned(__alignof(u64));

	++table->seq;
	swap_endian(key, (const u8 *)ip, 128);
	return add(&table->root6, 128, key, cidr, peer, lock);
}

void wg_allowedips_remove_by_peer(struct allowedips *table,
				  struct wg_peer *peer, struct mutex *lock)
{
	++table->seq;
	walk_remove_by_peer(&table->root4, peer, lock);
	walk_remove_by_peer(&table->root6, peer, lock);
}

int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr)
{
	const unsigned int cidr_bytes = DIV_ROUND_UP(node->cidr, 8U);
	swap_endian(ip, node->bits, node->bitlen);
	memset(ip + cidr_bytes, 0, node->bitlen / 8U - cidr_bytes);
	if (node->cidr)
		ip[cidr_bytes - 1U] &= ~0U << (-node->cidr % 8U);

	*cidr = node->cidr;
	return node->bitlen == 32 ? AF_INET : AF_INET6;
}

/* Returns a strong reference to a peer */
struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
					 struct sk_buff *skb)
{
	if (skb->protocol == htons(ETH_P_IP))
		return lookup(table->root4, 32, &ip_hdr(skb)->daddr);
	else if (skb->protocol == htons(ETH_P_IPV6))
		return lookup(table->root6, 128, &ipv6_hdr(skb)->daddr);
	return NULL;
}

/* Returns a strong reference to a peer */
struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
					 struct sk_buff *skb)
{
	if (skb->protocol == htons(ETH_P_IP))
		return lookup(table->root4, 32, &ip_hdr(skb)->saddr);
	else if (skb->protocol == htons(ETH_P_IPV6))
		return lookup(table->root6, 128, &ipv6_hdr(skb)->saddr);
	return NULL;
}

#include "selftest/allowedips.c"
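For readers tracing the trie above: each peer's allowed IPs form a longest-prefix-match table, and find_node() returns the deepest node that has an attached peer and whose prefix covers the key. A hedged sketch of the resulting routing semantics via wg(8), with placeholder peer keys:

    # Overlapping prefixes may belong to different peers; the most
    # specific matching prefix wins on lookup.
    wg set wg0 peer 'PEER_A_KEY' allowed-ips 10.0.0.0/8
    wg set wg0 peer 'PEER_B_KEY' allowed-ips 10.0.99.0/24

    # An outgoing packet to 10.0.99.7 is routed to peer B (the /24 node
    # is deeper); one to 10.0.1.1 goes to peer A. On rx,
    # wg_allowedips_lookup_src() applies the same table to the inner
    # source address, so packets from the wrong peer are dropped.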
drivers/net/wireguard/allowedips.h (new file, 59 lines)
@@ -0,0 +1,59 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_ALLOWEDIPS_H
#define _WG_ALLOWEDIPS_H

#include <linux/mutex.h>
#include <linux/ip.h>
#include <linux/ipv6.h>

struct wg_peer;

struct allowedips_node {
	struct wg_peer __rcu *peer;
	struct allowedips_node __rcu *bit[2];
	/* While it may seem scandalous that we waste space for v4,
	 * we're alloc'ing to the nearest power of 2 anyway, so this
	 * doesn't actually make a difference.
	 */
	u8 bits[16] __aligned(__alignof(u64));
	u8 cidr, bit_at_a, bit_at_b, bitlen;

	/* Keep rarely used list at bottom to be beyond cache line. */
	union {
		struct list_head peer_list;
		struct rcu_head rcu;
	};
};

struct allowedips {
	struct allowedips_node __rcu *root4;
	struct allowedips_node __rcu *root6;
	u64 seq;
};

void wg_allowedips_init(struct allowedips *table);
void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);
int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
			    u8 cidr, struct wg_peer *peer, struct mutex *lock);
int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
			    u8 cidr, struct wg_peer *peer, struct mutex *lock);
void wg_allowedips_remove_by_peer(struct allowedips *table,
				  struct wg_peer *peer, struct mutex *lock);
/* The ip input pointer should be __aligned(__alignof(u64))) */
int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr);

/* These return a strong reference to a peer: */
struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
					 struct sk_buff *skb);
struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
					 struct sk_buff *skb);

#ifdef DEBUG
bool wg_allowedips_selftest(void);
#endif

#endif /* _WG_ALLOWEDIPS_H */
drivers/net/wireguard/cookie.c (new file, 236 lines)
@@ -0,0 +1,236 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "cookie.h"
#include "peer.h"
#include "device.h"
#include "messages.h"
#include "ratelimiter.h"
#include "timers.h"

#include <crypto/blake2s.h>
#include <crypto/chacha20poly1305.h>

#include <net/ipv6.h>
#include <crypto/algapi.h>

void wg_cookie_checker_init(struct cookie_checker *checker,
			    struct wg_device *wg)
{
	init_rwsem(&checker->secret_lock);
	checker->secret_birthdate = ktime_get_coarse_boottime_ns();
	get_random_bytes(checker->secret, NOISE_HASH_LEN);
	checker->device = wg;
}

enum { COOKIE_KEY_LABEL_LEN = 8 };
static const u8 mac1_key_label[COOKIE_KEY_LABEL_LEN] = "mac1----";
static const u8 cookie_key_label[COOKIE_KEY_LABEL_LEN] = "cookie--";

static void precompute_key(u8 key[NOISE_SYMMETRIC_KEY_LEN],
			   const u8 pubkey[NOISE_PUBLIC_KEY_LEN],
			   const u8 label[COOKIE_KEY_LABEL_LEN])
{
	struct blake2s_state blake;

	blake2s_init(&blake, NOISE_SYMMETRIC_KEY_LEN);
	blake2s_update(&blake, label, COOKIE_KEY_LABEL_LEN);
	blake2s_update(&blake, pubkey, NOISE_PUBLIC_KEY_LEN);
	blake2s_final(&blake, key);
}

/* Must hold peer->handshake.static_identity->lock */
void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker)
{
	if (likely(checker->device->static_identity.has_identity)) {
		precompute_key(checker->cookie_encryption_key,
			       checker->device->static_identity.static_public,
			       cookie_key_label);
		precompute_key(checker->message_mac1_key,
			       checker->device->static_identity.static_public,
			       mac1_key_label);
	} else {
		memset(checker->cookie_encryption_key, 0,
		       NOISE_SYMMETRIC_KEY_LEN);
		memset(checker->message_mac1_key, 0, NOISE_SYMMETRIC_KEY_LEN);
	}
}

void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer)
{
	precompute_key(peer->latest_cookie.cookie_decryption_key,
		       peer->handshake.remote_static, cookie_key_label);
	precompute_key(peer->latest_cookie.message_mac1_key,
		       peer->handshake.remote_static, mac1_key_label);
}

void wg_cookie_init(struct cookie *cookie)
{
	memset(cookie, 0, sizeof(*cookie));
	init_rwsem(&cookie->lock);
}

static void compute_mac1(u8 mac1[COOKIE_LEN], const void *message, size_t len,
			 const u8 key[NOISE_SYMMETRIC_KEY_LEN])
{
	len = len - sizeof(struct message_macs) +
	      offsetof(struct message_macs, mac1);
	blake2s(mac1, message, key, COOKIE_LEN, len, NOISE_SYMMETRIC_KEY_LEN);
}

static void compute_mac2(u8 mac2[COOKIE_LEN], const void *message, size_t len,
			 const u8 cookie[COOKIE_LEN])
{
	len = len - sizeof(struct message_macs) +
	      offsetof(struct message_macs, mac2);
	blake2s(mac2, message, cookie, COOKIE_LEN, len, COOKIE_LEN);
}

static void make_cookie(u8 cookie[COOKIE_LEN], struct sk_buff *skb,
			struct cookie_checker *checker)
{
	struct blake2s_state state;

	if (wg_birthdate_has_expired(checker->secret_birthdate,
				     COOKIE_SECRET_MAX_AGE)) {
		down_write(&checker->secret_lock);
		checker->secret_birthdate = ktime_get_coarse_boottime_ns();
		get_random_bytes(checker->secret, NOISE_HASH_LEN);
		up_write(&checker->secret_lock);
	}

	down_read(&checker->secret_lock);

	blake2s_init_key(&state, COOKIE_LEN, checker->secret, NOISE_HASH_LEN);
	if (skb->protocol == htons(ETH_P_IP))
		blake2s_update(&state, (u8 *)&ip_hdr(skb)->saddr,
			       sizeof(struct in_addr));
	else if (skb->protocol == htons(ETH_P_IPV6))
		blake2s_update(&state, (u8 *)&ipv6_hdr(skb)->saddr,
			       sizeof(struct in6_addr));
	blake2s_update(&state, (u8 *)&udp_hdr(skb)->source, sizeof(__be16));
	blake2s_final(&state, cookie);

	up_read(&checker->secret_lock);
}

enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
						struct sk_buff *skb,
						bool check_cookie)
{
	struct message_macs *macs = (struct message_macs *)
		(skb->data + skb->len - sizeof(*macs));
	enum cookie_mac_state ret;
	u8 computed_mac[COOKIE_LEN];
	u8 cookie[COOKIE_LEN];

	ret = INVALID_MAC;
	compute_mac1(computed_mac, skb->data, skb->len,
		     checker->message_mac1_key);
	if (crypto_memneq(computed_mac, macs->mac1, COOKIE_LEN))
		goto out;

	ret = VALID_MAC_BUT_NO_COOKIE;

	if (!check_cookie)
		goto out;

	make_cookie(cookie, skb, checker);

	compute_mac2(computed_mac, skb->data, skb->len, cookie);
	if (crypto_memneq(computed_mac, macs->mac2, COOKIE_LEN))
		goto out;

	ret = VALID_MAC_WITH_COOKIE_BUT_RATELIMITED;
	if (!wg_ratelimiter_allow(skb, dev_net(checker->device->dev)))
		goto out;

	ret = VALID_MAC_WITH_COOKIE;

out:
	return ret;
}

void wg_cookie_add_mac_to_packet(void *message, size_t len,
				 struct wg_peer *peer)
{
	struct message_macs *macs = (struct message_macs *)
		((u8 *)message + len - sizeof(*macs));

	down_write(&peer->latest_cookie.lock);
	compute_mac1(macs->mac1, message, len,
		     peer->latest_cookie.message_mac1_key);
	memcpy(peer->latest_cookie.last_mac1_sent, macs->mac1, COOKIE_LEN);
	peer->latest_cookie.have_sent_mac1 = true;
	up_write(&peer->latest_cookie.lock);

	down_read(&peer->latest_cookie.lock);
	if (peer->latest_cookie.is_valid &&
	    !wg_birthdate_has_expired(peer->latest_cookie.birthdate,
				COOKIE_SECRET_MAX_AGE - COOKIE_SECRET_LATENCY))
		compute_mac2(macs->mac2, message, len,
			     peer->latest_cookie.cookie);
	else
		memset(macs->mac2, 0, COOKIE_LEN);
	up_read(&peer->latest_cookie.lock);
}

void wg_cookie_message_create(struct message_handshake_cookie *dst,
			      struct sk_buff *skb, __le32 index,
			      struct cookie_checker *checker)
{
	struct message_macs *macs = (struct message_macs *)
		((u8 *)skb->data + skb->len - sizeof(*macs));
	u8 cookie[COOKIE_LEN];

	dst->header.type = cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE);
	dst->receiver_index = index;
	get_random_bytes_wait(dst->nonce, COOKIE_NONCE_LEN);

	make_cookie(cookie, skb, checker);
	xchacha20poly1305_encrypt(dst->encrypted_cookie, cookie, COOKIE_LEN,
				  macs->mac1, COOKIE_LEN, dst->nonce,
				  checker->cookie_encryption_key);
}

void wg_cookie_message_consume(struct message_handshake_cookie *src,
			       struct wg_device *wg)
{
	struct wg_peer *peer = NULL;
	u8 cookie[COOKIE_LEN];
	bool ret;

	if (unlikely(!wg_index_hashtable_lookup(wg->index_hashtable,
						INDEX_HASHTABLE_HANDSHAKE |
						INDEX_HASHTABLE_KEYPAIR,
						src->receiver_index, &peer)))
		return;

	down_read(&peer->latest_cookie.lock);
	if (unlikely(!peer->latest_cookie.have_sent_mac1)) {
		up_read(&peer->latest_cookie.lock);
		goto out;
	}
	ret = xchacha20poly1305_decrypt(
		cookie, src->encrypted_cookie, sizeof(src->encrypted_cookie),
		peer->latest_cookie.last_mac1_sent, COOKIE_LEN, src->nonce,
		peer->latest_cookie.cookie_decryption_key);
	up_read(&peer->latest_cookie.lock);

	if (ret) {
		down_write(&peer->latest_cookie.lock);
		memcpy(peer->latest_cookie.cookie, cookie, COOKIE_LEN);
		peer->latest_cookie.birthdate = ktime_get_coarse_boottime_ns();
		peer->latest_cookie.is_valid = true;
		peer->latest_cookie.have_sent_mac1 = false;
		up_write(&peer->latest_cookie.lock);
	} else {
		net_dbg_ratelimited("%s: Could not decrypt invalid cookie response\n",
				    wg->dev->name);
	}

out:
	wg_peer_put(peer);
}
drivers/net/wireguard/cookie.h (new file, 59 lines)
@@ -0,0 +1,59 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_COOKIE_H
#define _WG_COOKIE_H

#include "messages.h"
#include <linux/rwsem.h>

struct wg_peer;

struct cookie_checker {
	u8 secret[NOISE_HASH_LEN];
	u8 cookie_encryption_key[NOISE_SYMMETRIC_KEY_LEN];
	u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
	u64 secret_birthdate;
	struct rw_semaphore secret_lock;
	struct wg_device *device;
};

struct cookie {
	u64 birthdate;
	bool is_valid;
	u8 cookie[COOKIE_LEN];
	bool have_sent_mac1;
	u8 last_mac1_sent[COOKIE_LEN];
	u8 cookie_decryption_key[NOISE_SYMMETRIC_KEY_LEN];
	u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
	struct rw_semaphore lock;
};

enum cookie_mac_state {
	INVALID_MAC,
	VALID_MAC_BUT_NO_COOKIE,
	VALID_MAC_WITH_COOKIE_BUT_RATELIMITED,
	VALID_MAC_WITH_COOKIE
};

void wg_cookie_checker_init(struct cookie_checker *checker,
			    struct wg_device *wg);
void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker);
void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer);
void wg_cookie_init(struct cookie *cookie);

enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
						struct sk_buff *skb,
						bool check_cookie);
void wg_cookie_add_mac_to_packet(void *message, size_t len,
				 struct wg_peer *peer);

void wg_cookie_message_create(struct message_handshake_cookie *src,
			      struct sk_buff *skb, __le32 index,
			      struct cookie_checker *checker);
void wg_cookie_message_consume(struct message_handshake_cookie *src,
			       struct wg_device *wg);

#endif /* _WG_COOKIE_H */
drivers/net/wireguard/device.c (new file, 458 lines)
@@ -0,0 +1,458 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "queueing.h"
#include "socket.h"
#include "timers.h"
#include "device.h"
#include "ratelimiter.h"
#include "peer.h"
#include "messages.h"

#include <linux/module.h>
#include <linux/rtnetlink.h>
#include <linux/inet.h>
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
#include <linux/if_arp.h>
#include <linux/icmp.h>
#include <linux/suspend.h>
#include <net/icmp.h>
#include <net/rtnetlink.h>
#include <net/ip_tunnels.h>
#include <net/addrconf.h>

static LIST_HEAD(device_list);

static int wg_open(struct net_device *dev)
{
	struct in_device *dev_v4 = __in_dev_get_rtnl(dev);
	struct inet6_dev *dev_v6 = __in6_dev_get(dev);
	struct wg_device *wg = netdev_priv(dev);
	struct wg_peer *peer;
	int ret;

	if (dev_v4) {
		/* At some point we might put this check near the ip_rt_send_
		 * redirect call of ip_forward in net/ipv4/ip_forward.c, similar
		 * to the current secpath check.
		 */
		IN_DEV_CONF_SET(dev_v4, SEND_REDIRECTS, false);
		IPV4_DEVCONF_ALL(dev_net(dev), SEND_REDIRECTS) = false;
	}
	if (dev_v6)
		dev_v6->cnf.addr_gen_mode = IN6_ADDR_GEN_MODE_NONE;

	ret = wg_socket_init(wg, wg->incoming_port);
	if (ret < 0)
		return ret;
	mutex_lock(&wg->device_update_lock);
	list_for_each_entry(peer, &wg->peer_list, peer_list) {
		wg_packet_send_staged_packets(peer);
		if (peer->persistent_keepalive_interval)
			wg_packet_send_keepalive(peer);
	}
	mutex_unlock(&wg->device_update_lock);
	return 0;
}

#ifdef CONFIG_PM_SLEEP
static int wg_pm_notification(struct notifier_block *nb, unsigned long action,
			      void *data)
{
	struct wg_device *wg;
	struct wg_peer *peer;

	/* If the machine is constantly suspending and resuming, as part of
	 * its normal operation rather than as a somewhat rare event, then we
	 * don't actually want to clear keys.
	 */
	if (IS_ENABLED(CONFIG_PM_AUTOSLEEP) || IS_ENABLED(CONFIG_ANDROID))
		return 0;

	if (action != PM_HIBERNATION_PREPARE && action != PM_SUSPEND_PREPARE)
		return 0;

	rtnl_lock();
	list_for_each_entry(wg, &device_list, device_list) {
		mutex_lock(&wg->device_update_lock);
		list_for_each_entry(peer, &wg->peer_list, peer_list) {
			del_timer(&peer->timer_zero_key_material);
			wg_noise_handshake_clear(&peer->handshake);
			wg_noise_keypairs_clear(&peer->keypairs);
		}
		mutex_unlock(&wg->device_update_lock);
	}
	rtnl_unlock();
	rcu_barrier();
	return 0;
}

static struct notifier_block pm_notifier = { .notifier_call = wg_pm_notification };
#endif

static int wg_stop(struct net_device *dev)
{
	struct wg_device *wg = netdev_priv(dev);
	struct wg_peer *peer;

	mutex_lock(&wg->device_update_lock);
	list_for_each_entry(peer, &wg->peer_list, peer_list) {
		wg_packet_purge_staged_packets(peer);
		wg_timers_stop(peer);
		wg_noise_handshake_clear(&peer->handshake);
		wg_noise_keypairs_clear(&peer->keypairs);
		wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
	}
	mutex_unlock(&wg->device_update_lock);
	skb_queue_purge(&wg->incoming_handshakes);
	wg_socket_reinit(wg, NULL, NULL);
	return 0;
}

static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct wg_device *wg = netdev_priv(dev);
	struct sk_buff_head packets;
	struct wg_peer *peer;
	struct sk_buff *next;
	sa_family_t family;
	u32 mtu;
	int ret;

	if (unlikely(wg_skb_examine_untrusted_ip_hdr(skb) != skb->protocol)) {
		ret = -EPROTONOSUPPORT;
		net_dbg_ratelimited("%s: Invalid IP packet\n", dev->name);
		goto err;
	}

	peer = wg_allowedips_lookup_dst(&wg->peer_allowedips, skb);
	if (unlikely(!peer)) {
		ret = -ENOKEY;
		if (skb->protocol == htons(ETH_P_IP))
			net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI4\n",
					    dev->name, &ip_hdr(skb)->daddr);
		else if (skb->protocol == htons(ETH_P_IPV6))
			net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n",
					    dev->name, &ipv6_hdr(skb)->daddr);
		goto err;
	}

	family = READ_ONCE(peer->endpoint.addr.sa_family);
	if (unlikely(family != AF_INET && family != AF_INET6)) {
		ret = -EDESTADDRREQ;
		net_dbg_ratelimited("%s: No valid endpoint has been configured or discovered for peer %llu\n",
				    dev->name, peer->internal_id);
		goto err_peer;
	}

	mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;

	__skb_queue_head_init(&packets);
	if (!skb_is_gso(skb)) {
		skb_mark_not_on_list(skb);
	} else {
		struct sk_buff *segs = skb_gso_segment(skb, 0);

		if (unlikely(IS_ERR(segs))) {
			ret = PTR_ERR(segs);
			goto err_peer;
		}
		dev_kfree_skb(skb);
		skb = segs;
	}

	skb_list_walk_safe(skb, skb, next) {
		skb_mark_not_on_list(skb);

		skb = skb_share_check(skb, GFP_ATOMIC);
		if (unlikely(!skb))
			continue;

		/* We only need to keep the original dst around for icmp,
		 * so at this point we're in a position to drop it.
		 */
		skb_dst_drop(skb);

		PACKET_CB(skb)->mtu = mtu;

		__skb_queue_tail(&packets, skb);
	}

	spin_lock_bh(&peer->staged_packet_queue.lock);
	/* If the queue is getting too big, we start removing the oldest packets
	 * until it's small again. We do this before adding the new packet, so
	 * we don't remove GSO segments that are in excess.
	 */
	while (skb_queue_len(&peer->staged_packet_queue) > MAX_STAGED_PACKETS) {
		dev_kfree_skb(__skb_dequeue(&peer->staged_packet_queue));
		++dev->stats.tx_dropped;
	}
	skb_queue_splice_tail(&packets, &peer->staged_packet_queue);
	spin_unlock_bh(&peer->staged_packet_queue.lock);

	wg_packet_send_staged_packets(peer);

	wg_peer_put(peer);
	return NETDEV_TX_OK;

err_peer:
	wg_peer_put(peer);
err:
	++dev->stats.tx_errors;
	if (skb->protocol == htons(ETH_P_IP))
		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
	else if (skb->protocol == htons(ETH_P_IPV6))
		icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0);
	kfree_skb(skb);
	return ret;
}

static const struct net_device_ops netdev_ops = {
	.ndo_open		= wg_open,
	.ndo_stop		= wg_stop,
	.ndo_start_xmit		= wg_xmit,
	.ndo_get_stats64	= ip_tunnel_get_stats64
};

static void wg_destruct(struct net_device *dev)
{
	struct wg_device *wg = netdev_priv(dev);

	rtnl_lock();
	list_del(&wg->device_list);
	rtnl_unlock();
	mutex_lock(&wg->device_update_lock);
	wg->incoming_port = 0;
	wg_socket_reinit(wg, NULL, NULL);
	/* The final references are cleared in the below calls to destroy_workqueue. */
	wg_peer_remove_all(wg);
	destroy_workqueue(wg->handshake_receive_wq);
	destroy_workqueue(wg->handshake_send_wq);
	destroy_workqueue(wg->packet_crypt_wq);
	wg_packet_queue_free(&wg->decrypt_queue, true);
	wg_packet_queue_free(&wg->encrypt_queue, true);
	rcu_barrier(); /* Wait for all the peers to be actually freed. */
	wg_ratelimiter_uninit();
	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
	skb_queue_purge(&wg->incoming_handshakes);
	free_percpu(dev->tstats);
	free_percpu(wg->incoming_handshakes_worker);
	if (wg->have_creating_net_ref)
		put_net(wg->creating_net);
	kvfree(wg->index_hashtable);
	kvfree(wg->peer_hashtable);
	mutex_unlock(&wg->device_update_lock);

	pr_debug("%s: Interface deleted\n", dev->name);
	free_netdev(dev);
}

static const struct device_type device_type = { .name = KBUILD_MODNAME };

static void wg_setup(struct net_device *dev)
{
	struct wg_device *wg = netdev_priv(dev);
	enum { WG_NETDEV_FEATURES = NETIF_F_HW_CSUM | NETIF_F_RXCSUM |
				    NETIF_F_SG | NETIF_F_GSO |
				    NETIF_F_GSO_SOFTWARE | NETIF_F_HIGHDMA };

	dev->netdev_ops = &netdev_ops;
	dev->hard_header_len = 0;
	dev->addr_len = 0;
	dev->needed_headroom = DATA_PACKET_HEAD_ROOM;
	dev->needed_tailroom = noise_encrypted_len(MESSAGE_PADDING_MULTIPLE);
	dev->type = ARPHRD_NONE;
	dev->flags = IFF_POINTOPOINT | IFF_NOARP;
	dev->priv_flags |= IFF_NO_QUEUE;
	dev->features |= NETIF_F_LLTX;
	dev->features |= WG_NETDEV_FEATURES;
	dev->hw_features |= WG_NETDEV_FEATURES;
	dev->hw_enc_features |= WG_NETDEV_FEATURES;
	dev->mtu = ETH_DATA_LEN - MESSAGE_MINIMUM_LENGTH -
		   sizeof(struct udphdr) -
		   max(sizeof(struct ipv6hdr), sizeof(struct iphdr));

	SET_NETDEV_DEVTYPE(dev, &device_type);

	/* We need to keep the dst around in case of icmp replies. */
	netif_keep_dst(dev);

	memset(wg, 0, sizeof(*wg));
	wg->dev = dev;
}

static int wg_newlink(struct net *src_net, struct net_device *dev,
		      struct nlattr *tb[], struct nlattr *data[],
		      struct netlink_ext_ack *extack)
{
	struct wg_device *wg = netdev_priv(dev);
	int ret = -ENOMEM;

	wg->creating_net = src_net;
	init_rwsem(&wg->static_identity.lock);
	mutex_init(&wg->socket_update_lock);
	mutex_init(&wg->device_update_lock);
	skb_queue_head_init(&wg->incoming_handshakes);
	wg_allowedips_init(&wg->peer_allowedips);
	wg_cookie_checker_init(&wg->cookie_checker, wg);
	INIT_LIST_HEAD(&wg->peer_list);
	wg->device_update_gen = 1;

	wg->peer_hashtable = wg_pubkey_hashtable_alloc();
	if (!wg->peer_hashtable)
		return ret;

	wg->index_hashtable = wg_index_hashtable_alloc();
	if (!wg->index_hashtable)
		goto err_free_peer_hashtable;

	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
	if (!dev->tstats)
		goto err_free_index_hashtable;

	wg->incoming_handshakes_worker =
		wg_packet_percpu_multicore_worker_alloc(
				wg_packet_handshake_receive_worker, wg);
	if (!wg->incoming_handshakes_worker)
		goto err_free_tstats;

	wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
			WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
	if (!wg->handshake_receive_wq)
		goto err_free_incoming_handshakes;

	wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
			WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
	if (!wg->handshake_send_wq)
		goto err_destroy_handshake_receive;

	wg->packet_crypt_wq = alloc_workqueue("wg-crypt-%s",
			WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 0, dev->name);
	if (!wg->packet_crypt_wq)
		goto err_destroy_handshake_send;

	ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker,
				   true, MAX_QUEUED_PACKETS);
	if (ret < 0)
		goto err_destroy_packet_crypt;

	ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker,
				   true, MAX_QUEUED_PACKETS);
	if (ret < 0)
		goto err_free_encrypt_queue;

	ret = wg_ratelimiter_init();
	if (ret < 0)
		goto err_free_decrypt_queue;

	ret = register_netdevice(dev);
	if (ret < 0)
		goto err_uninit_ratelimiter;

	list_add(&wg->device_list, &device_list);

	/* We wait until the end to assign priv_destructor, so that
	 * register_netdevice doesn't call it for us if it fails.
	 */
	dev->priv_destructor = wg_destruct;

	pr_debug("%s: Interface created\n", dev->name);
	return ret;

err_uninit_ratelimiter:
	wg_ratelimiter_uninit();
err_free_decrypt_queue:
	wg_packet_queue_free(&wg->decrypt_queue, true);
err_free_encrypt_queue:
	wg_packet_queue_free(&wg->encrypt_queue, true);
err_destroy_packet_crypt:
	destroy_workqueue(wg->packet_crypt_wq);
err_destroy_handshake_send:
	destroy_workqueue(wg->handshake_send_wq);
err_destroy_handshake_receive:
	destroy_workqueue(wg->handshake_receive_wq);
err_free_incoming_handshakes:
	free_percpu(wg->incoming_handshakes_worker);
err_free_tstats:
	free_percpu(dev->tstats);
err_free_index_hashtable:
	kvfree(wg->index_hashtable);
err_free_peer_hashtable:
	kvfree(wg->peer_hashtable);
	return ret;
}

static struct rtnl_link_ops link_ops __read_mostly = {
	.kind			= KBUILD_MODNAME,
	.priv_size		= sizeof(struct wg_device),
	.setup			= wg_setup,
	.newlink		= wg_newlink,
};

static int wg_netdevice_notification(struct notifier_block *nb,
				     unsigned long action, void *data)
{
	struct net_device *dev = ((struct netdev_notifier_info *)data)->dev;
	struct wg_device *wg = netdev_priv(dev);

	ASSERT_RTNL();

	if (action != NETDEV_REGISTER || dev->netdev_ops != &netdev_ops)
		return 0;

	if (dev_net(dev) == wg->creating_net && wg->have_creating_net_ref) {
		put_net(wg->creating_net);
		wg->have_creating_net_ref = false;
	} else if (dev_net(dev) != wg->creating_net &&
		   !wg->have_creating_net_ref) {
		wg->have_creating_net_ref = true;
		get_net(wg->creating_net);
	}
	return 0;
}

static struct notifier_block netdevice_notifier = {
	.notifier_call = wg_netdevice_notification
};

int __init wg_device_init(void)
{
	int ret;

#ifdef CONFIG_PM_SLEEP
	ret = register_pm_notifier(&pm_notifier);
	if (ret)
		return ret;
#endif

	ret = register_netdevice_notifier(&netdevice_notifier);
	if (ret)
		goto error_pm;

	ret = rtnl_link_register(&link_ops);
	if (ret)
		goto error_netdevice;

	return 0;

error_netdevice:
	unregister_netdevice_notifier(&netdevice_notifier);
error_pm:
#ifdef CONFIG_PM_SLEEP
	unregister_pm_notifier(&pm_notifier);
#endif
	return ret;
}

void wg_device_uninit(void)
{
	rtnl_link_unregister(&link_ops);
	unregister_netdevice_notifier(&netdevice_notifier);
#ifdef CONFIG_PM_SLEEP
	unregister_pm_notifier(&pm_notifier);
#endif
	rcu_barrier();
}
drivers/net/wireguard/device.h (new file, 73 lines)
@@ -0,0 +1,73 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_DEVICE_H
#define _WG_DEVICE_H

#include "noise.h"
#include "allowedips.h"
#include "peerlookup.h"
#include "cookie.h"

#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/net.h>
#include <linux/ptr_ring.h>

struct wg_device;

struct multicore_worker {
	void *ptr;
	struct work_struct work;
};

struct crypt_queue {
	struct ptr_ring ring;
	union {
		struct {
			struct multicore_worker __percpu *worker;
			int last_cpu;
		};
		struct work_struct work;
	};
};

struct wg_device {
	struct net_device *dev;
	struct crypt_queue encrypt_queue, decrypt_queue;
	struct sock __rcu *sock4, *sock6;
	struct net *creating_net;
	struct noise_static_identity static_identity;
	struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
	struct workqueue_struct *packet_crypt_wq;
	struct sk_buff_head incoming_handshakes;
	int incoming_handshake_cpu;
	struct multicore_worker __percpu *incoming_handshakes_worker;
	struct cookie_checker cookie_checker;
	struct pubkey_hashtable *peer_hashtable;
	struct index_hashtable *index_hashtable;
	struct allowedips peer_allowedips;
	struct mutex device_update_lock, socket_update_lock;
	struct list_head device_list, peer_list;
	unsigned int num_peers, device_update_gen;
	u32 fwmark;
	u16 incoming_port;
	bool have_creating_net_ref;
};

int wg_device_init(void);
void wg_device_uninit(void);

/* Later after the dust settles, this can be moved into include/linux/skbuff.h,
 * where virtually all code that deals with GSO segs can benefit, around ~30
 * drivers as of writing.
 */
#define skb_list_walk_safe(first, skb, next)                   \
	for (skb = first, next = skb->next; skb;               \
	     skb = next, next = skb ? skb->next : NULL)

#endif /* _WG_DEVICE_H */
drivers/net/wireguard/main.c (new file, 64 lines)
@@ -0,0 +1,64 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "version.h"
#include "device.h"
#include "noise.h"
#include "queueing.h"
#include "ratelimiter.h"
#include "netlink.h"

#include <uapi/linux/wireguard.h>

#include <linux/version.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/genetlink.h>
#include <net/rtnetlink.h>

static int __init mod_init(void)
{
	int ret;

#ifdef DEBUG
	if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||
	    !wg_ratelimiter_selftest())
		return -ENOTRECOVERABLE;
#endif
	wg_noise_init();

	ret = wg_device_init();
	if (ret < 0)
		goto err_device;

	ret = wg_genetlink_init();
	if (ret < 0)
		goto err_netlink;

	pr_info("WireGuard " WIREGUARD_VERSION " loaded. See www.wireguard.com for information.\n");
	pr_info("Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.\n");

	return 0;

err_netlink:
	wg_device_uninit();
err_device:
	return ret;
}

static void __exit mod_exit(void)
{
	wg_genetlink_uninit();
	wg_device_uninit();
}

module_init(mod_init);
module_exit(mod_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("WireGuard secure network tunnel");
MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
MODULE_VERSION(WIREGUARD_VERSION);
MODULE_ALIAS_RTNL_LINK(KBUILD_MODNAME);
MODULE_ALIAS_GENL_FAMILY(WG_GENL_NAME);
128
drivers/net/wireguard/messages.h
Normal file
128
drivers/net/wireguard/messages.h
Normal file
@@ -0,0 +1,128 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_MESSAGES_H
#define _WG_MESSAGES_H

#include <crypto/curve25519.h>
#include <crypto/chacha20poly1305.h>
#include <crypto/blake2s.h>

#include <linux/kernel.h>
#include <linux/param.h>
#include <linux/skbuff.h>

enum noise_lengths {
	NOISE_PUBLIC_KEY_LEN = CURVE25519_KEY_SIZE,
	NOISE_SYMMETRIC_KEY_LEN = CHACHA20POLY1305_KEY_SIZE,
	NOISE_TIMESTAMP_LEN = sizeof(u64) + sizeof(u32),
	NOISE_AUTHTAG_LEN = CHACHA20POLY1305_AUTHTAG_SIZE,
	NOISE_HASH_LEN = BLAKE2S_HASH_SIZE
};

#define noise_encrypted_len(plain_len) ((plain_len) + NOISE_AUTHTAG_LEN)

enum cookie_values {
	COOKIE_SECRET_MAX_AGE = 2 * 60,
	COOKIE_SECRET_LATENCY = 5,
	COOKIE_NONCE_LEN = XCHACHA20POLY1305_NONCE_SIZE,
	COOKIE_LEN = 16
};

enum counter_values {
	COUNTER_BITS_TOTAL = 2048,
	COUNTER_REDUNDANT_BITS = BITS_PER_LONG,
	COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS
};

enum limits {
	REKEY_AFTER_MESSAGES = 1ULL << 60,
	REJECT_AFTER_MESSAGES = U64_MAX - COUNTER_WINDOW_SIZE - 1,
	REKEY_TIMEOUT = 5,
	REKEY_TIMEOUT_JITTER_MAX_JIFFIES = HZ / 3,
	REKEY_AFTER_TIME = 120,
	REJECT_AFTER_TIME = 180,
	INITIATIONS_PER_SECOND = 50,
	MAX_PEERS_PER_DEVICE = 1U << 20,
	KEEPALIVE_TIMEOUT = 10,
	MAX_TIMER_HANDSHAKES = 90 / REKEY_TIMEOUT,
	MAX_QUEUED_INCOMING_HANDSHAKES = 4096, /* TODO: replace this with DQL */
	MAX_STAGED_PACKETS = 128,
	MAX_QUEUED_PACKETS = 1024 /* TODO: replace this with DQL */
};

enum message_type {
	MESSAGE_INVALID = 0,
	MESSAGE_HANDSHAKE_INITIATION = 1,
	MESSAGE_HANDSHAKE_RESPONSE = 2,
	MESSAGE_HANDSHAKE_COOKIE = 3,
	MESSAGE_DATA = 4
};

struct message_header {
	/* The actual layout of this that we want is:
	 * u8 type
	 * u8 reserved_zero[3]
	 *
	 * But it turns out that by encoding this as little endian,
	 * we achieve the same thing, and it makes checking faster.
	 */
	__le32 type;
};
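For illustration, a hypothetical receive-path check (the identifier names here are assumed, not taken from this commit): since the three reserved bytes are always zero, a single little-endian 32-bit comparison validates the type byte and the reserved bytes in one go:

	struct message_header *header = (struct message_header *)skb->data;

	if (header->type == cpu_to_le32(MESSAGE_DATA))
		/* ... handle an encrypted data packet ... */;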
struct message_macs {
	u8 mac1[COOKIE_LEN];
	u8 mac2[COOKIE_LEN];
};

struct message_handshake_initiation {
	struct message_header header;
	__le32 sender_index;
	u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
	u8 encrypted_static[noise_encrypted_len(NOISE_PUBLIC_KEY_LEN)];
	u8 encrypted_timestamp[noise_encrypted_len(NOISE_TIMESTAMP_LEN)];
	struct message_macs macs;
};

struct message_handshake_response {
	struct message_header header;
	__le32 sender_index;
	__le32 receiver_index;
	u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
	u8 encrypted_nothing[noise_encrypted_len(0)];
	struct message_macs macs;
};

struct message_handshake_cookie {
	struct message_header header;
	__le32 receiver_index;
	u8 nonce[COOKIE_NONCE_LEN];
	u8 encrypted_cookie[noise_encrypted_len(COOKIE_LEN)];
};

struct message_data {
	struct message_header header;
	__le32 key_idx;
	__le64 counter;
	u8 encrypted_data[];
};

#define message_data_len(plain_len) \
	(noise_encrypted_len(plain_len) + sizeof(struct message_data))

enum message_alignments {
	MESSAGE_PADDING_MULTIPLE = 16,
	MESSAGE_MINIMUM_LENGTH = message_data_len(0)
};

#define SKB_HEADER_LEN                                       \
	(max(sizeof(struct iphdr), sizeof(struct ipv6hdr)) + \
	 sizeof(struct udphdr) + NET_SKB_PAD)
#define DATA_PACKET_HEAD_ROOM \
	ALIGN(sizeof(struct message_data) + SKB_HEADER_LEN, 4)

enum { HANDSHAKE_DSCP = 0x88 /* AF41, plus 00 ECN */ };

#endif /* _WG_MESSAGES_H */
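A quick worked example of the constants above: struct message_data occupies 16 bytes (4-byte header, 4-byte key_idx, 8-byte counter) and the ChaCha20Poly1305 authentication tag is another 16 bytes, so MESSAGE_MINIMUM_LENGTH = message_data_len(0) = 32 bytes, the on-wire size of a keepalive (a data message carrying a zero-length plaintext).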
642  drivers/net/wireguard/netlink.c  Normal file
File diff suppressed because it is too large
12  drivers/net/wireguard/netlink.h  Normal file
@@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_NETLINK_H
#define _WG_NETLINK_H

int wg_genetlink_init(void);
void wg_genetlink_uninit(void);

#endif /* _WG_NETLINK_H */
828  drivers/net/wireguard/noise.c  Normal file
File diff suppressed because it is too large
137  drivers/net/wireguard/noise.h  Normal file
@@ -0,0 +1,137 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */
#ifndef _WG_NOISE_H
#define _WG_NOISE_H

#include "messages.h"
#include "peerlookup.h"

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <linux/rwsem.h>
#include <linux/mutex.h>
#include <linux/kref.h>

union noise_counter {
	struct {
		u64 counter;
		unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
		spinlock_t lock;
	} receive;
	atomic64_t counter;
};

struct noise_symmetric_key {
	u8 key[NOISE_SYMMETRIC_KEY_LEN];
	union noise_counter counter;
	u64 birthdate;
	bool is_valid;
};

struct noise_keypair {
	struct index_hashtable_entry entry;
	struct noise_symmetric_key sending;
	struct noise_symmetric_key receiving;
	__le32 remote_index;
	bool i_am_the_initiator;
	struct kref refcount;
	struct rcu_head rcu;
	u64 internal_id;
};

struct noise_keypairs {
	struct noise_keypair __rcu *current_keypair;
	struct noise_keypair __rcu *previous_keypair;
	struct noise_keypair __rcu *next_keypair;
	spinlock_t keypair_update_lock;
};

struct noise_static_identity {
	u8 static_public[NOISE_PUBLIC_KEY_LEN];
	u8 static_private[NOISE_PUBLIC_KEY_LEN];
	struct rw_semaphore lock;
	bool has_identity;
};

enum noise_handshake_state {
	HANDSHAKE_ZEROED,
	HANDSHAKE_CREATED_INITIATION,
	HANDSHAKE_CONSUMED_INITIATION,
	HANDSHAKE_CREATED_RESPONSE,
	HANDSHAKE_CONSUMED_RESPONSE
};

struct noise_handshake {
	struct index_hashtable_entry entry;

	enum noise_handshake_state state;
	u64 last_initiation_consumption;

	struct noise_static_identity *static_identity;

	u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN];
	u8 remote_static[NOISE_PUBLIC_KEY_LEN];
	u8 remote_ephemeral[NOISE_PUBLIC_KEY_LEN];
	u8 precomputed_static_static[NOISE_PUBLIC_KEY_LEN];

	u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN];

	u8 hash[NOISE_HASH_LEN];
	u8 chaining_key[NOISE_HASH_LEN];

	u8 latest_timestamp[NOISE_TIMESTAMP_LEN];
	__le32 remote_index;

	/* Protects all members except the immutable (after noise_handshake_
	 * init): remote_static, precomputed_static_static, static_identity.
	 */
	struct rw_semaphore lock;
};

struct wg_device;

void wg_noise_init(void);
bool wg_noise_handshake_init(struct noise_handshake *handshake,
			     struct noise_static_identity *static_identity,
			     const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN],
			     const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN],
			     struct wg_peer *peer);
void wg_noise_handshake_clear(struct noise_handshake *handshake);

/* Backdate the last-sent-handshake timestamp by (REKEY_TIMEOUT + 1) seconds,
 * so that the rekey throttle permits sending a new handshake immediately.
 */
static inline void wg_noise_reset_last_sent_handshake(atomic64_t *handshake_ns)
{
	atomic64_set(handshake_ns, ktime_get_coarse_boottime_ns() -
				   (u64)(REKEY_TIMEOUT + 1) * NSEC_PER_SEC);
}

void wg_noise_keypair_put(struct noise_keypair *keypair, bool unreference_now);
struct noise_keypair *wg_noise_keypair_get(struct noise_keypair *keypair);
void wg_noise_keypairs_clear(struct noise_keypairs *keypairs);
bool wg_noise_received_with_keypair(struct noise_keypairs *keypairs,
				    struct noise_keypair *received_keypair);
void wg_noise_expire_current_peer_keypairs(struct wg_peer *peer);

void wg_noise_set_static_identity_private_key(
	struct noise_static_identity *static_identity,
	const u8 private_key[NOISE_PUBLIC_KEY_LEN]);
bool wg_noise_precompute_static_static(struct wg_peer *peer);

bool
wg_noise_handshake_create_initiation(struct message_handshake_initiation *dst,
				     struct noise_handshake *handshake);
struct wg_peer *
wg_noise_handshake_consume_initiation(struct message_handshake_initiation *src,
				      struct wg_device *wg);

bool wg_noise_handshake_create_response(struct message_handshake_response *dst,
					struct noise_handshake *handshake);
struct wg_peer *
wg_noise_handshake_consume_response(struct message_handshake_response *src,
				    struct wg_device *wg);

bool wg_noise_handshake_begin_session(struct noise_handshake *handshake,
				      struct noise_keypairs *keypairs);

#endif /* _WG_NOISE_H */
240  drivers/net/wireguard/peer.c  Normal file
@@ -0,0 +1,240 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "peer.h"
#include "device.h"
#include "queueing.h"
#include "timers.h"
#include "peerlookup.h"
#include "noise.h"

#include <linux/kref.h>
#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/list.h>

static atomic64_t peer_counter = ATOMIC64_INIT(0);

struct wg_peer *wg_peer_create(struct wg_device *wg,
			       const u8 public_key[NOISE_PUBLIC_KEY_LEN],
			       const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN])
{
	struct wg_peer *peer;
	int ret = -ENOMEM;

	lockdep_assert_held(&wg->device_update_lock);

	if (wg->num_peers >= MAX_PEERS_PER_DEVICE)
		return ERR_PTR(ret);

	peer = kzalloc(sizeof(*peer), GFP_KERNEL);
	if (unlikely(!peer))
		return ERR_PTR(ret);
	peer->device = wg;

	if (!wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
				     public_key, preshared_key, peer)) {
		ret = -EKEYREJECTED;
		goto err_1;
	}
	if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
		goto err_1;
	if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
				 MAX_QUEUED_PACKETS))
		goto err_2;
	if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
				 MAX_QUEUED_PACKETS))
		goto err_3;

	peer->internal_id = atomic64_inc_return(&peer_counter);
	peer->serial_work_cpu = nr_cpumask_bits;
	wg_cookie_init(&peer->latest_cookie);
	wg_timers_init(peer);
	wg_cookie_checker_precompute_peer_keys(peer);
	spin_lock_init(&peer->keypairs.keypair_update_lock);
	INIT_WORK(&peer->transmit_handshake_work,
		  wg_packet_handshake_send_worker);
	rwlock_init(&peer->endpoint_lock);
	kref_init(&peer->refcount);
	skb_queue_head_init(&peer->staged_packet_queue);
	wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
	set_bit(NAPI_STATE_NO_BUSY_POLL, &peer->napi.state);
	netif_napi_add(wg->dev, &peer->napi, wg_packet_rx_poll,
		       NAPI_POLL_WEIGHT);
	napi_enable(&peer->napi);
	list_add_tail(&peer->peer_list, &wg->peer_list);
	INIT_LIST_HEAD(&peer->allowedips_list);
	wg_pubkey_hashtable_add(wg->peer_hashtable, peer);
	++wg->num_peers;
	pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
	return peer;

err_3:
	wg_packet_queue_free(&peer->tx_queue, false);
err_2:
	dst_cache_destroy(&peer->endpoint_cache);
err_1:
	kfree(peer);
	return ERR_PTR(ret);
}

struct wg_peer *wg_peer_get_maybe_zero(struct wg_peer *peer)
{
	RCU_LOCKDEP_WARN(!rcu_read_lock_bh_held(),
			 "Taking peer reference without holding the RCU read lock");
	if (unlikely(!peer || !kref_get_unless_zero(&peer->refcount)))
		return NULL;
	return peer;
}

static void peer_make_dead(struct wg_peer *peer)
{
	/* Remove from configuration-time lookup structures. */
	list_del_init(&peer->peer_list);
	wg_allowedips_remove_by_peer(&peer->device->peer_allowedips, peer,
				     &peer->device->device_update_lock);
	wg_pubkey_hashtable_remove(peer->device->peer_hashtable, peer);

	/* Mark as dead, so that we don't allow jumping contexts after. */
	WRITE_ONCE(peer->is_dead, true);

	/* The caller must now synchronize_rcu() for this to take effect. */
}

static void peer_remove_after_dead(struct wg_peer *peer)
{
	WARN_ON(!peer->is_dead);

	/* No more keypairs can be created for this peer, since is_dead protects
	 * add_new_keypair, so we can now destroy existing ones.
	 */
	wg_noise_keypairs_clear(&peer->keypairs);

	/* Destroy all ongoing timers that were in-flight at the beginning of
	 * this function.
	 */
	wg_timers_stop(peer);

	/* The transition between packet encryption/decryption queues isn't
	 * guarded by is_dead, but each reference's life is strictly bounded by
	 * two generations: once for parallel crypto and once for serial
	 * ingestion, so we can simply flush twice, and be sure that we no
	 * longer have references inside these queues.
	 */

	/* a) For encrypt/decrypt. */
	flush_workqueue(peer->device->packet_crypt_wq);
	/* b.1) For send (but not receive, since that's napi). */
	flush_workqueue(peer->device->packet_crypt_wq);
	/* b.2.1) For receive (but not send, since that's wq). */
	napi_disable(&peer->napi);
	/* b.2.2) It's now safe to remove the napi struct, which must be done
	 * here from process context.
	 */
	netif_napi_del(&peer->napi);

	/* Ensure any workstructs we own (like transmit_handshake_work or
	 * clear_peer_work) no longer are in use.
	 */
	flush_workqueue(peer->device->handshake_send_wq);

	/* After the above flushes, a peer might still be active in a few
	 * different contexts: 1) from xmit(), before hitting is_dead and
	 * returning, 2) from wg_packet_consume_data(), before hitting is_dead
	 * and returning, 3) from wg_receive_handshake_packet() after a point
	 * where it has processed an incoming handshake packet, but where
	 * all calls to pass it off to timers fail because of is_dead. We won't
	 * have new references in (1) eventually, because we're removed from
	 * allowedips; we won't have new references in (2) eventually, because
	 * wg_index_hashtable_lookup will always return NULL, since we removed
	 * all existing keypairs and no more can be created; we won't have new
	 * references in (3) eventually, because we're removed from the pubkey
	 * hash table, which allows for a maximum of one handshake response,
	 * via the still-uncleared index hashtable entry, but not more than one,
	 * and in wg_cookie_message_consume, the lookup eventually gets a peer
	 * with a refcount of zero, so no new reference is taken.
	 */

	--peer->device->num_peers;
	wg_peer_put(peer);
}

/* We have a separate "remove" function to make sure that all active places
 * where a peer is currently operating will eventually come to an end and not
 * pass their reference onto another context.
 */
void wg_peer_remove(struct wg_peer *peer)
{
	if (unlikely(!peer))
		return;
	lockdep_assert_held(&peer->device->device_update_lock);

	peer_make_dead(peer);
	synchronize_rcu();
	peer_remove_after_dead(peer);
}

void wg_peer_remove_all(struct wg_device *wg)
{
	struct wg_peer *peer, *temp;
	LIST_HEAD(dead_peers);

	lockdep_assert_held(&wg->device_update_lock);

	/* Avoid having to traverse individually for each one. */
	wg_allowedips_free(&wg->peer_allowedips, &wg->device_update_lock);

	list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
		peer_make_dead(peer);
		list_add_tail(&peer->peer_list, &dead_peers);
	}
	synchronize_rcu();
	list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
		peer_remove_after_dead(peer);
}

static void rcu_release(struct rcu_head *rcu)
{
	struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);

	dst_cache_destroy(&peer->endpoint_cache);
	wg_packet_queue_free(&peer->rx_queue, false);
	wg_packet_queue_free(&peer->tx_queue, false);

	/* The final zeroing takes care of clearing any remaining handshake key
	 * material and other potentially sensitive information.
	 */
	kzfree(peer);
}

static void kref_release(struct kref *refcount)
{
	struct wg_peer *peer = container_of(refcount, struct wg_peer, refcount);

	pr_debug("%s: Peer %llu (%pISpfsc) destroyed\n",
		 peer->device->dev->name, peer->internal_id,
		 &peer->endpoint.addr);

	/* Remove ourself from dynamic runtime lookup structures, now that the
	 * last reference is gone.
	 */
	wg_index_hashtable_remove(peer->device->index_hashtable,
				  &peer->handshake.entry);

	/* Remove any lingering packets that didn't have a chance to be
	 * transmitted.
	 */
	wg_packet_purge_staged_packets(peer);

	/* Free the memory used. */
	call_rcu(&peer->rcu, rcu_release);
}

void wg_peer_put(struct wg_peer *peer)
{
	if (unlikely(!peer))
		return;
	kref_put(&peer->refcount, kref_release);
}
83  drivers/net/wireguard/peer.h  Normal file
@@ -0,0 +1,83 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_PEER_H
#define _WG_PEER_H

#include "device.h"
#include "noise.h"
#include "cookie.h"

#include <linux/types.h>
#include <linux/netfilter.h>
#include <linux/spinlock.h>
#include <linux/kref.h>
#include <net/dst_cache.h>

struct wg_device;

struct endpoint {
	union {
		struct sockaddr addr;
		struct sockaddr_in addr4;
		struct sockaddr_in6 addr6;
	};
	union {
		struct {
			struct in_addr src4;
			/* Essentially the same as addr6->scope_id */
			int src_if4;
		};
		struct in6_addr src6;
	};
};

struct wg_peer {
	struct wg_device *device;
	struct crypt_queue tx_queue, rx_queue;
	struct sk_buff_head staged_packet_queue;
	int serial_work_cpu;
	struct noise_keypairs keypairs;
	struct endpoint endpoint;
	struct dst_cache endpoint_cache;
	rwlock_t endpoint_lock;
	struct noise_handshake handshake;
	atomic64_t last_sent_handshake;
	struct work_struct transmit_handshake_work, clear_peer_work;
	struct cookie latest_cookie;
	struct hlist_node pubkey_hash;
	u64 rx_bytes, tx_bytes;
	struct timer_list timer_retransmit_handshake, timer_send_keepalive;
	struct timer_list timer_new_handshake, timer_zero_key_material;
	struct timer_list timer_persistent_keepalive;
	unsigned int timer_handshake_attempts;
	u16 persistent_keepalive_interval;
	bool timer_need_another_keepalive;
	bool sent_lastminute_handshake;
	struct timespec64 walltime_last_handshake;
	struct kref refcount;
	struct rcu_head rcu;
	struct list_head peer_list;
	struct list_head allowedips_list;
	u64 internal_id;
	struct napi_struct napi;
	bool is_dead;
};

struct wg_peer *wg_peer_create(struct wg_device *wg,
			       const u8 public_key[NOISE_PUBLIC_KEY_LEN],
			       const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]);

struct wg_peer *__must_check wg_peer_get_maybe_zero(struct wg_peer *peer);
static inline struct wg_peer *wg_peer_get(struct wg_peer *peer)
{
	kref_get(&peer->refcount);
	return peer;
}
void wg_peer_put(struct wg_peer *peer);
void wg_peer_remove(struct wg_peer *peer);
void wg_peer_remove_all(struct wg_device *wg);

#endif /* _WG_PEER_H */
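As a minimal usage sketch (a hypothetical caller, not code from this commit): lookups such as wg_pubkey_hashtable_lookup() return a strong reference, so the caller owns a reference that it must release with wg_peer_put() once finished:

	struct wg_peer *peer;

	peer = wg_pubkey_hashtable_lookup(wg->peer_hashtable, pubkey);
	if (!peer)
		return;			/* no peer with this public key */
	/* ... use peer->handshake, peer->endpoint, etc. ... */
	wg_peer_put(peer);		/* drop the reference the lookup took */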
221  drivers/net/wireguard/peerlookup.c  Normal file
@@ -0,0 +1,221 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#include "peerlookup.h"
#include "peer.h"
#include "noise.h"

static struct hlist_head *pubkey_bucket(struct pubkey_hashtable *table,
					const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
{
	/* siphash gives us a secure 64bit number based on a random key. Since
	 * the bits are uniformly distributed, we can then mask off to get the
	 * bits we need.
	 */
	const u64 hash = siphash(pubkey, NOISE_PUBLIC_KEY_LEN, &table->key);

	return &table->hashtable[hash & (HASH_SIZE(table->hashtable) - 1)];
}

struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void)
{
	struct pubkey_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);

	if (!table)
		return NULL;

	get_random_bytes(&table->key, sizeof(table->key));
	hash_init(table->hashtable);
	mutex_init(&table->lock);
	return table;
}

void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
			     struct wg_peer *peer)
{
	mutex_lock(&table->lock);
	hlist_add_head_rcu(&peer->pubkey_hash,
			   pubkey_bucket(table, peer->handshake.remote_static));
	mutex_unlock(&table->lock);
}

void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
				struct wg_peer *peer)
{
	mutex_lock(&table->lock);
	hlist_del_init_rcu(&peer->pubkey_hash);
	mutex_unlock(&table->lock);
}

/* Returns a strong reference to a peer */
struct wg_peer *
wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
			   const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
{
	struct wg_peer *iter_peer, *peer = NULL;

	rcu_read_lock_bh();
	hlist_for_each_entry_rcu_bh(iter_peer, pubkey_bucket(table, pubkey),
				    pubkey_hash) {
		if (!memcmp(pubkey, iter_peer->handshake.remote_static,
			    NOISE_PUBLIC_KEY_LEN)) {
			peer = iter_peer;
			break;
		}
	}
	peer = wg_peer_get_maybe_zero(peer);
	rcu_read_unlock_bh();
	return peer;
}

static struct hlist_head *index_bucket(struct index_hashtable *table,
				       const __le32 index)
{
	/* Since the indices are random and thus all bits are uniformly
	 * distributed, we can find its bucket simply by masking.
	 */
	return &table->hashtable[(__force u32)index &
				 (HASH_SIZE(table->hashtable) - 1)];
}

struct index_hashtable *wg_index_hashtable_alloc(void)
{
	struct index_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);

	if (!table)
		return NULL;

	hash_init(table->hashtable);
	spin_lock_init(&table->lock);
	return table;
}

/* At the moment, we limit ourselves to 2^20 total peers, which generally might
 * amount to 2^20*3 items in this hashtable. The algorithm below works by
 * picking a random number and testing it. We can see that these limits mean we
 * usually succeed pretty quickly:
 *
 * >>> def calculation(tries, size):
 * ...     return (size / 2**32)**(tries - 1) * (1 - (size / 2**32))
 * ...
 * >>> calculation(1, 2**20 * 3)
 * 0.999267578125
 * >>> calculation(2, 2**20 * 3)
 * 0.0007318854331970215
 * >>> calculation(3, 2**20 * 3)
 * 5.360489012673497e-07
 * >>> calculation(4, 2**20 * 3)
 * 3.9261394135792216e-10
 *
 * At the moment, we don't do any masking, so this algorithm isn't exactly
 * constant time in either the random guessing or in the hash list lookup. We
 * could require a minimum of 3 tries, which would successfully mask the
 * guessing. This would not, however, help with the growing hash lengths,
 * which is another thing to consider moving forward.
 */

__le32 wg_index_hashtable_insert(struct index_hashtable *table,
				 struct index_hashtable_entry *entry)
{
	struct index_hashtable_entry *existing_entry;

	spin_lock_bh(&table->lock);
	hlist_del_init_rcu(&entry->index_hash);
	spin_unlock_bh(&table->lock);

	rcu_read_lock_bh();

search_unused_slot:
	/* First we try to find an unused slot, randomly, while unlocked. */
	entry->index = (__force __le32)get_random_u32();
	hlist_for_each_entry_rcu_bh(existing_entry,
				    index_bucket(table, entry->index),
				    index_hash) {
		if (existing_entry->index == entry->index)
			/* If it's already in use, we continue searching. */
			goto search_unused_slot;
	}

	/* Once we've found an unused slot, we lock it, and then double-check
	 * that nobody else stole it from us.
	 */
	spin_lock_bh(&table->lock);
	hlist_for_each_entry_rcu_bh(existing_entry,
				    index_bucket(table, entry->index),
				    index_hash) {
		if (existing_entry->index == entry->index) {
			spin_unlock_bh(&table->lock);
			/* If it was stolen, we start over. */
			goto search_unused_slot;
		}
	}
	/* Otherwise, we know we have it exclusively (since we're locked),
	 * so we insert.
	 */
	hlist_add_head_rcu(&entry->index_hash,
			   index_bucket(table, entry->index));
	spin_unlock_bh(&table->lock);

	rcu_read_unlock_bh();

	return entry->index;
}

bool wg_index_hashtable_replace(struct index_hashtable *table,
				struct index_hashtable_entry *old,
				struct index_hashtable_entry *new)
{
	if (unlikely(hlist_unhashed(&old->index_hash)))
		return false;
	spin_lock_bh(&table->lock);
	new->index = old->index;
	hlist_replace_rcu(&old->index_hash, &new->index_hash);

	/* Calling init here NULLs out index_hash, and in fact after this
	 * function returns, it's theoretically possible for this to get
	 * reinserted elsewhere. That means the RCU lookup below might either
	 * terminate early or jump between buckets, in which case the packet
	 * simply gets dropped, which isn't terrible.
	 */
	INIT_HLIST_NODE(&old->index_hash);
	spin_unlock_bh(&table->lock);
	return true;
}

void wg_index_hashtable_remove(struct index_hashtable *table,
			       struct index_hashtable_entry *entry)
{
	spin_lock_bh(&table->lock);
	hlist_del_init_rcu(&entry->index_hash);
	spin_unlock_bh(&table->lock);
}

/* Returns a strong reference to an entry->peer */
struct index_hashtable_entry *
wg_index_hashtable_lookup(struct index_hashtable *table,
			  const enum index_hashtable_type type_mask,
			  const __le32 index, struct wg_peer **peer)
{
	struct index_hashtable_entry *iter_entry, *entry = NULL;

	rcu_read_lock_bh();
	hlist_for_each_entry_rcu_bh(iter_entry, index_bucket(table, index),
				    index_hash) {
		if (iter_entry->index == index) {
			if (likely(iter_entry->type & type_mask))
				entry = iter_entry;
			break;
		}
	}
	if (likely(entry)) {
		entry->peer = wg_peer_get_maybe_zero(entry->peer);
		if (likely(entry->peer))
			*peer = entry->peer;
		else
			entry = NULL;
	}
	rcu_read_unlock_bh();
	return entry;
}
64  drivers/net/wireguard/peerlookup.h  Normal file
@@ -0,0 +1,64 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 */

#ifndef _WG_PEERLOOKUP_H
#define _WG_PEERLOOKUP_H

#include "messages.h"

#include <linux/hashtable.h>
#include <linux/mutex.h>
#include <linux/siphash.h>

struct wg_peer;

struct pubkey_hashtable {
	/* TODO: move to rhashtable */
	DECLARE_HASHTABLE(hashtable, 11);
	siphash_key_t key;
	struct mutex lock;
};

struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void);
void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
			     struct wg_peer *peer);
void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
				struct wg_peer *peer);
struct wg_peer *
wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
			   const u8 pubkey[NOISE_PUBLIC_KEY_LEN]);

struct index_hashtable {
	/* TODO: move to rhashtable */
	DECLARE_HASHTABLE(hashtable, 13);
	spinlock_t lock;
};

enum index_hashtable_type {
	INDEX_HASHTABLE_HANDSHAKE = 1U << 0,
	INDEX_HASHTABLE_KEYPAIR = 1U << 1
};

struct index_hashtable_entry {
	struct wg_peer *peer;
	struct hlist_node index_hash;
	enum index_hashtable_type type;
	__le32 index;
};

struct index_hashtable *wg_index_hashtable_alloc(void);
__le32 wg_index_hashtable_insert(struct index_hashtable *table,
				 struct index_hashtable_entry *entry);
bool wg_index_hashtable_replace(struct index_hashtable *table,
				struct index_hashtable_entry *old,
				struct index_hashtable_entry *new);
void wg_index_hashtable_remove(struct index_hashtable *table,
			       struct index_hashtable_entry *entry);
struct index_hashtable_entry *
wg_index_hashtable_lookup(struct index_hashtable *table,
			  const enum index_hashtable_type type_mask,
			  const __le32 index, struct wg_peer **peer);

#endif /* _WG_PEERLOOKUP_H */
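A rough sketch of the intended calling pattern (hypothetical caller and message variable, not code from this commit): wg_index_hashtable_insert() picks the random index, publishes the entry, and returns the index for use as the sender_index on the wire; on receipt, wg_index_hashtable_lookup() filters by the type mask and hands back a strong peer reference that must be released:

	struct index_hashtable_entry *entry;
	struct wg_peer *peer = NULL;

	entry = wg_index_hashtable_lookup(table, INDEX_HASHTABLE_KEYPAIR,
					  msg->receiver_index, &peer);
	if (!entry)
		return;		/* unknown index, or entry of the wrong type */
	/* ... use entry and peer ... */
	wg_peer_put(peer);	/* the lookup took a strong reference */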
Some files were not shown because too many files have changed in this diff.