Commit Graph

86 Commits

Author SHA1 Message Date
Arun Sharma
60063497a9 atomic: use <linux/atomic.h>
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-07-26 16:49:47 -07:00
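As a rough illustration of the consolidation this enables, the
per-architecture fallback can now live once in <linux/atomic.h>; the
guard shown here is a sketch of how an architecture with its own
implementation would opt out:

#ifndef atomic_inc_not_zero
/* Increment v unless it is zero; returns non-zero if it did increment. */
#define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
#endif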
Shirley Ma
9e380825ab vhost: handle wrap around in # of bufs math
The math for calculating the # of outstanding buffers gives
incorrect results when vq->upend_idx wraps around zero.
Fix that.

Signed-off-by: Shirley Ma <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-21 10:48:27 +03:00
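A hedged sketch of the wrap-safe computation; upend_idx/done_idx and
UIO_MAXIOV come from the surrounding zero-copy commits, but the exact
expression should be read as illustrative:

/* Number of outstanding zero-copy buffers, valid even after upend_idx
 * has wrapped past zero while done_idx has not caught up yet. */
num_pends = likely(vq->upend_idx >= vq->done_idx) ?
	    (vq->upend_idx - vq->done_idx) :
	    (vq->upend_idx + UIO_MAXIOV - vq->done_idx);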
Michael S. Tsirkin
c047e5f317 vhost-net: update used ring on backend change
On backend change, we flushed out outstanding skbs
but forgot to update the used ring, so that
done entries were left in the ubuf_info ring.
As a result we lose heads or complete incorrect ones,
crashing the guest or leaking memory.
Fix by updating the used ring.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-21 10:23:31 +03:00
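A rough, hedged sketch of the fix in the set-backend path; the helper
names follow the vhost-net zero-copy code, but the surrounding details
are assumptions:

/* After waiting out outstanding skbs on a backend change, push the
 * already-completed ubuf_info entries into the used ring so no done
 * heads are left behind. */
if (oldubufs) {
	vhost_ubuf_put_and_wait(oldubufs);
	mutex_lock(&vq->mutex);
	vhost_zerocopy_signal_used(vq);
	mutex_unlock(&vq->mutex);
}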
Michael S. Tsirkin
b834226b04 vhost: optimize interrupt enable/disable
As we now only update the used ring after enabling the backend, we
can write the flags with __put_user: since that is done on the data
path, it matters.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-19 17:17:28 +03:00
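A minimal sketch of the data-path write this enables, assuming the
usual vhost field names and that access_ok() validation of the ring
was done at setup time:

/* The ring address was already validated, so the unchecked variant of
 * the user-space write is sufficient on the hot path. */
if (__put_user(vq->used_flags, &vq->used->flags) < 0)
	return -EFAULT;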
Michael S. Tsirkin
75fd9edc10 vhost: fix zcopy reference counting
Fix get/put refcount imbalance with zero copy,
which caused qemu to hang forever on guest driver unload.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-19 13:28:34 +03:00
Jason Wang
2723feaa8e vhost: set log when updating used flags or avail event
We need to log writes when updating used flags and avail event
fields.  Otherwise the guest may see a stale value after migration and
miss notifying the host.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-19 13:28:34 +03:00
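A hedged sketch of the added logging; log_write() is the dirty-page
logging helper referenced elsewhere in this log, while the log_base,
log_addr and log_ctx fields are assumptions about the surrounding
vhost state:

if (unlikely(vq->log_used)) {
	/* Make the used-flags write visible before logging it. */
	smp_wmb();
	log_write(vq->log_base, vq->log_addr, sizeof vq->used->flags);
	if (vq->log_ctx)
		eventfd_signal(vq->log_ctx, 1);
}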
Jason Wang
f59281dafb vhost: init used ring after backend was set
Move the used ring initialization to after the backend is set. This
makes it possible to disable the backend, tweak the used ring, and
then restart. It also makes it possible to log the used ring
write correctly.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-07-19 13:28:34 +03:00
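The new ordering in the set-backend path, sketched under the
assumption that the used-ring setup is factored into a helper along
the lines of vhost_init_used():

/* Install the backend first, then (re)initialize the used ring, so the
 * ring can be tweaked while the backend is disabled and the init write
 * can be logged correctly. */
rcu_assign_pointer(vq->private_data, sock);
r = vhost_init_used(vq);
if (r)
	goto err_used;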
Michael S. Tsirkin
bab632d69e vhost: vhost TX zero-copy support
From: Shirley Ma <mashirle@us.ibm.com>

This adds experimental zero copy support in vhost-net,
disabled by default. To enable, set
experimental_zcopytx module option to 1.

This patch maintains the outstanding userspace buffers in the
sequence they are delivered to vhost. The outstanding userspace
buffers are marked as done once the lower device has finished DMA on
them. This is detected through a callback invoked on the skb's last
reference (kfree_skb). Two buffer indices are used for this purpose.

The vhost-net device passes the userspace buffer info to the lower
device's skb through the message control field. DMA-done status
checks and guest notification are handled by handle_tx: in the worst
case all buffers in the vq are in pending/done status, so we need to
notify the guest to release DMA-done buffers before we can get any
new buffers from the vq.

One known problem is that if the guest stops submitting
buffers, buffers might never get used until some
further action, e.g. device reset. This does not
seem to affect Linux guests.

Signed-off-by: Shirley <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-07-18 10:42:32 -07:00
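A condensed, hedged sketch of the bookkeeping described above: two
indices (upend_idx, done_idx) track buffers in flight, and the buffer
info handed to the lower device via message control is completed from
the skb's last-reference callback. Names beyond upend_idx, done_idx
and ubuf_info are illustrative:

/* TX path: record the outstanding userspace buffer and pass its info
 * down with the skb via msg_control. */
struct ubuf_info *ubuf = &vq->ubuf_info[vq->upend_idx];

ubuf->callback = vhost_zerocopy_callback;	/* runs on last kfree_skb ref */
ubuf->ctx = vq->ubufs;
ubuf->desc = vq->upend_idx;
msg.msg_control = ubuf;
msg.msg_controllen = sizeof(*ubuf);
vq->upend_idx = (vq->upend_idx + 1) % UIO_MAXIOV;

/* Completion path (handle_tx): everything up to done_idx has finished
 * DMA, so add those heads to the used ring and notify the guest. */
vhost_zerocopy_signal_used(vq);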
Michael S. Tsirkin
8ea8cf89e1 vhost: support event index
Support the new event index feature. When acked,
utilize it to reduce the # of interrupts sent to the guest.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2011-05-30 11:14:15 +09:30
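The interrupt-suppression test this enables is the standard
event-index check from the virtio ring; a sketch of the helper as it
appears in include/linux/virtio_ring.h around this time:

/* Signal only if the other side's event index falls within the window
 * of entries added since the last signal, i.e. event_idx is in
 * (old, new_idx] under mod-2^16 arithmetic. */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}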
Rob Landley
6151658751 Correct occurrences of
- Documentation/kvm/ to Documentation/virtual/kvm
- Documentation/uml/ to Documentation/virtual/uml
- Documentation/lguest/ to Documentation/virtual/lguest
throughout the kernel source tree.

Signed-off-by: Rob Landley <rob@landley.net>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
2011-05-06 09:27:55 -07:00
Michael S. Tsirkin
de4d768a42 vhost-net: remove unlocked use of receive_queue
Use of skb_queue_empty(&sock->sk->sk_receive_queue)
without taking the sk_receive_queue.lock is unsafe
or useless. Take it out.

Reported-by:  Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-13 23:08:19 +02:00
Jason Wang
783e398854 vhost: lock receive queue, not the socket
vhost takes the socket lock to try and prevent the skb from being
pulled from the receive queue after skb_peek. However, this is not
the right lock to use for that; sk_receive_queue.lock is. Fix that
up.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-13 23:08:04 +02:00
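A hedged sketch of peeking at the head length under the queue's own
lock (the helper name and surrounding code are illustrative):

/* Hold sk_receive_queue.lock, not the socket lock, so the skb cannot
 * be dequeued between the peek and the length read. */
static int peek_head_len(struct sock *sk)
{
	struct sk_buff *head;
	unsigned long flags;
	int len = 0;

	spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
	head = skb_peek(&sk->sk_receive_queue);
	if (head)
		len = head->len;
	spin_unlock_irqrestore(&sk->sk_receive_queue.lock, flags);
	return len;
}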
Jason Wang
94249369e9 vhost-net: Unify the code of mergeable and big buffer handling
Code duplication was found between the handling of mergeable and big
buffers, so this patch unifies them. This is easily done by adding a
quota to get_rx_bufs(), which is used to limit the number of buffers
it returns (for mergeable buffers the quota is simply UIO_MAXIOV; for
big buffers it is just 1), so the previous handle_rx_mergeable() can
be reused for big buffers as well.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-13 17:00:10 +02:00
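A hedged sketch of the unified call site; the argument list is
illustrative, the point being the quota passed as the final parameter:

/* Mergeable buffers may consume up to UIO_MAXIOV descriptors per
 * packet, big buffers exactly one, so one helper serves both layouts. */
headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in, vq_log, &log,
			likely(mergeable) ? UIO_MAXIOV : 1);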
Jason Wang
cfbdab9513 vhost-net: check the support of mergeable buffer outside the receive loop
No need to check for mergeable buffer support inside the receive
loop, as the whole of handle_rx() runs in the read critical section.
So this patch moves the check ahead of the receive loop.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-13 16:57:30 +02:00
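The hoisted check, sketched: the feature bit cannot change while
handle_rx() holds the read-side critical section, so it is read once
before the loop (the local variable is an assumption):

/* Evaluate once; the feature set is stable for this handle_rx() run. */
bool mergeable = vhost_has_feature(&net->dev, VIRTIO_NET_F_MRG_RXBUF);

for (;;) {
	/* per-packet receive work uses 'mergeable' without re-checking */
}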
Michael S. Tsirkin
fcc042a280 vhost: copy_from_user -> __copy_from_user
copy_from_user is pretty high in the perf top profile; replacing it
with __copy_from_user helps.
It's also safe because we do access_ok checks during setup.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-08 18:03:05 +02:00
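A hedged sketch of why the unchecked copy is safe: the descriptor
ring is range-checked with access_ok() when the ring is set up, so
the hot path can skip the repeated check (surrounding code is
illustrative):

/* Setup time: validate the whole descriptor ring once. */
if (!access_ok(VERIFY_READ, vq->desc, num * sizeof *vq->desc))
	return -EFAULT;

/* Data path: the range is known good, use the unchecked variant. */
if (unlikely(__copy_from_user(&desc, vq->desc + i, sizeof desc)))
	return -EFAULT;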
Krishna Kumar
d47effe1be vhost: Cleanup vhost.c and net.c
Minor cleanup of vhost.c and net.c to match coding style.

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-03-08 18:02:47 +02:00
Michael S. Tsirkin
5e18247b02 vhost: rcu annotation fixup
When built with RCU checks enabled, vhost triggers
bogus warnings because vhost features are sometimes read without
dev->mutex, and the private pointer is read under our own flavor of
RCU, where the vhost work serves as the read-side critical section.

Fixing it properly is not trivial.
Disable the warnings by stubbing out the checks for now.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-02-01 16:48:46 +02:00
Michael S. Tsirkin
0174b0c30a vhost: fix signed/unsigned comparison
To detect that a sequence number is done, we do math on unsigned
integers, so the result is unsigned too. That is not what was intended
for the <= comparison; the result is userspace getting stuck forever
in the flush call. Convert to int to fix this.

Further, get rid of ({}) to make code clearer.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2011-01-10 10:03:39 +02:00
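A sketch of the fix described above, assuming the completion check
lives in a small helper like this (names are illustrative):

/* seq and done_seq are unsigned; store the difference in a signed int
 * so "already done" is detected even across wrap-around. */
static bool vhost_work_seq_done(struct vhost_dev *dev,
				struct vhost_work *work, unsigned seq)
{
	int left;

	spin_lock_irq(&dev->work_lock);
	left = seq - work->done_seq;
	spin_unlock_irq(&dev->work_lock);
	return left <= 0;
}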
David S. Miller
9fe146aef4 Merge branch 'vhost-net-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost 2010-12-14 11:33:23 -08:00
Michael S. Tsirkin
71ccc212e5 vhost test module
This adds a test module for vhost infrastructure.
Intentionally not tied to kbuild to prevent people
from installing and loading it accidentally.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-12-09 16:00:21 +02:00
Michael S. Tsirkin
28831ee60b vhost: better variable name in logging
We really store a page offset in write_address,
so rename it to write_page to avoid confusion.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-12-09 16:00:10 +02:00
Michael S. Tsirkin
3bf9be40ff vhost: correctly set bits of dirty pages
Fix two bugs in dirty page logging: when counting pages we should
increase the address by 1 instead of by VHOST_PAGE_SIZE, and
log_write() should correctly process requests that cross pages when
write_address does not start at a page boundary.

Reported-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-12-09 15:39:17 +02:00
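A condensed, hedged sketch of the corrected loop: the length is first
padded to cover a non-page-aligned start, and the page index then
advances by 1 per iteration, setting one dirty bit per page
(set_bit_to_user() is the vhost helper; the rest is illustrative):

u64 write_page = write_address / VHOST_PAGE_SIZE;

/* Cover writes that do not start on a page boundary. */
write_length += write_address % VHOST_PAGE_SIZE;
for (;;) {
	/* One dirty bit per page in the userspace log bitmap. */
	r = set_bit_to_user(write_page % 8, log_base + write_page / 8);
	if (r < 0)
		return r;
	if (write_length <= VHOST_PAGE_SIZE)
		break;
	write_length -= VHOST_PAGE_SIZE;
	write_page += 1;	/* count in pages, not bytes */
}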
Jason Wang
a290aec88a vhost: fix typos in comment
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-12-09 15:39:15 +02:00
Michael S. Tsirkin
bf5e0bd27f vhost: remove unused include
vhost.c does not need to know about sockets,
so don't include sock.h.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-12-09 15:39:13 +02:00
Michael S. Tsirkin
11cd1a8b8c vhost/net: fix rcu check usage
An incorrect RCU check was used, as RCU isn't done
under the mutex here. Force the check to 1 for now,
to stop it from complaining.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2010-11-25 11:29:16 +02:00
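What "force check to 1" amounts to, sketched: the constant-true
condition passed to the lockdep-aware accessor silences the warning
until proper annotations exist (private_data is the usual vhost
backend pointer; treat the exact call site as an assumption):

/* Not protected by vq->mutex on this path; the vhost flush acts as
 * our read-side critical section, so just disable the lockdep check. */
sock = rcu_dereference_check(vq->private_data, 1);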