There's no difference in supporting S3 and S4 for virtio devices: the
vqs have to be re-created as the device has to be assumed to be reset at
restore-time. Since S4 already handles this situation, we can directly
use the same code and callbacks for S3 support.
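As a rough sketch of how that falls out (not the literal patch; the callback names here are assumptions), the S4 freeze/restore pair can simply be registered for every system sleep transition:

#include <linux/pm.h>

/* Sketch: SET_SYSTEM_SLEEP_PM_OPS() wires the same two functions up as
 * suspend/resume (S3) and freeze/thaw/restore (S4), so the S4 handlers
 * cover S3 for free. */
static const struct dev_pm_ops virtio_pci_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(virtio_pci_freeze, virtio_pci_restore)
};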
Signed-off-by: Amit Shah <amit.shah@redhat.com>
restore_common() was shared between restore and thaw callbacks. With
thaw gone, we don't need restore_common() anymore.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
The thaw operation was used by the balloon driver, but after the last
commit there's no reason to have separate thaw and restore callbacks.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
There's no reason a stats update after restore can't work. If the host
requested stats and the guest entered S4 before servicing the request,
the request can still be processed and sent off to the host after
restore.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Remove all #inclusions of asm/system.h preparatory to splitting and killing
it. Performed with the following command:
perl -p -i -e 's!^#\s*include\s*<asm/system[.]h>.*\n!!' `grep -Irl '^#\s*include\s*<asm/system[.]h>' *`
Signed-off-by: David Howells <dhowells@redhat.com>
commit e562966dba added support for S4 to
the balloon driver. The freeze function did nothing to free the pages,
since reclaiming the pages from the host to immediately give them back
(if S4 was successful) seemed wasteful. Also, if S4 wasn't successful,
the guest would have to re-fill the balloon. On restore, the pages were
supposed to be marked freed and the free page counters were incremented
to reflect the balloon was totally deflated.
However, this wasn't done right. The pages that were earlier taken away
from the guest during a balloon inflation operation were just shown as
used pages after a successful restore from S4. Just a fancy way of
leaking lots of memory.
Instead of trying that, just leak the balloon on freeze and fill it on
restore/thaw paths. This works properly now. The optimisation to not
leak can be added later on after a bit of refactoring of the code.
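A minimal sketch of that shape, with names following the virtio_balloon
driver (the exact bodies here are assumptions, not the patch itself):

/* Sketch: freeze() fully deflates ("leaks") the balloon so the saved
 * image carries no pages the resumed host knows nothing about;
 * restore()/thaw() simply re-inflate towards the host's target.
 * vq teardown/re-creation is omitted here. */
static int virtballoon_freeze(struct virtio_device *vdev)
{
	struct virtio_balloon *vb = vdev->priv;

	while (vb->num_pages)
		leak_balloon(vb, vb->num_pages);
	update_balloon_size(vb);
	return 0;
}

static int virtballoon_restore(struct virtio_device *vdev)
{
	struct virtio_balloon *vb = vdev->priv;
	s64 diff = towards_target(vb);

	if (diff > 0)
		fill_balloon(vb, diff);
	update_balloon_size(vb);
	return 0;
}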
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Use virtio_mb() to make sure the available index is exposed before
checking the avail event. Otherwise we may read a stale avail event
value in the guest and never kick the host afterwards.
Note: this fixes a bug introduced by ee7cd8981e.
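In sketch form, the fix places a full barrier in virtqueue_kick_prepare()
before the event index is read (simplified; the surrounding details are
assumptions):

bool virtqueue_kick_prepare(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);
	u16 new, old;
	bool needs_kick;

	/* Make the updated avail->idx visible to the host before reading
	 * the avail event it published; otherwise a stale event value can
	 * make us skip a kick the host is still waiting for. */
	virtio_mb(vq);

	old = vq->vring.avail->idx - vq->num_added;
	new = vq->vring.avail->idx;
	vq->num_added = 0;

	if (vq->event)
		needs_kick = vring_need_event(vring_avail_event(&vq->vring),
					      new, old);
	else
		needs_kick = !(vq->vring.used->flags & VRING_USED_F_NO_NOTIFY);
	return needs_kick;
}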
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@kernel.org
Handling balloon hibernate / restore is tricky. If the balloon was
inflated before going into the hibernation state, upon resume, the host
will not have any memory of that. Any pages that were passed on to the
host earlier would most likely be invalid, and the host will have to
re-balloon to the previous value to get in the pre-hibernate state.
So the only sane thing for the guest to do here is to discard all the
pages that were put in the balloon. When to discard the pages is the
next question.
One solution is to deflate the balloon just before writing the image to
the disk (in the freeze() PM callback). However, asking for pages from
the host just to discard them immediately after seems wasteful of
resources. Hence, it makes sense to do this by just fudging our
counters soon after wakeup. This means we don't deflate the balloon
before sleep, and also don't put unnecessary pressure on the host.
This also helps in the thaw case: if the freeze fails for whatever
reason, the balloon should continue to remain in the inflated state.
This was tested by issuing 'swapoff -a' and trying to go into the S4
state. That fails, and the balloon stays inflated, as expected. Both
the host and the guest are happy.
Finally, in the restore() callback, we empty the list of pages that were
previously given off to the host, add the appropriate number of pages to
the totalram_pages counter, reset the num_pages counter to 0, and
all is fine.
As a last step, delete the vqs on the freeze callback to prepare for
hibernation, and re-create them in the restore and thaw callbacks to
resume normal operation.
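In sketch form, that last step boils down to the following calls
(init_vqs() stands in for the driver's usual vq setup and the placement
is an assumption):

	/* freeze(): once the balloon accounting is settled, drop the vqs;
	 * the device is assumed to be reset across hibernation. */
	vdev->config->del_vqs(vdev);

	/* restore()/thaw(): re-create the inflate/deflate/stats vqs before
	 * normal operation resumes. */
	err = init_vqs(vb);
	if (err)
		return err;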
The kthread doesn't race with any operations here, since it's frozen
before the freeze() call and is thawed after the thaw() and restore()
callbacks, so we're safe with that.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Handle thaw, restore and freeze notifications from the PM core. Expose
these to individual virtio drivers that can quiesce and resume vq
operations. For drivers not implementing the thaw() method, use the
restore method instead.
These functions also save device-specific data so that the device can be
put back into its pre-suspend state after resume, and they disable and
enable the PCI device in the freeze and restore callbacks, respectively.
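A sketch of the thaw-falls-back-to-restore behaviour inside the PCI
layer (simplified; the body is an assumption, not the literal patch):

static int virtio_pci_thaw(struct device *dev)
{
	struct pci_dev *pci_dev = to_pci_dev(dev);
	struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
	struct virtio_driver *drv = container_of(vp_dev->vdev.dev.driver,
						 struct virtio_driver, driver);
	int ret;

	/* Bring the PCI device back before poking the virtio driver. */
	ret = pci_enable_device(pci_dev);
	if (ret)
		return ret;

	/* Prefer the driver's thaw(); fall back to restore() if absent. */
	if (drv && drv->thaw)
		ret = drv->thaw(&vp_dev->vdev);
	else if (drv && drv->restore)
		ret = drv->restore(&vp_dev->vdev);
	return ret;
}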
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The older PM API doesn't have a way to get notifications on hibernate
events. Switch to the newer one that gives us those notifications.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Under the existing #ifdef DEBUG, check that drivers don't let more than
1/10 of a second pass between an add_buf() and a
virtqueue_notify()/virtqueue_kick_prepare() call.
We could get false positives on a really busy system, but it's good for
development.
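A hedged sketch of such a check (the last_add_time bookkeeping and the
use of jiffies are assumptions; the fields would live in struct
vring_virtqueue under the same #ifdef):

#ifdef DEBUG
/* Sketch: virtqueue_add_buf() records a timestamp; kick_prepare/notify
 * complain if the driver sat on added buffers for more than ~1/10 s. */
static inline void check_kick_delay(struct vring_virtqueue *vq)
{
	if (vq->last_add_time_valid)
		WARN_ON(jiffies - vq->last_add_time > HZ / 10);
	vq->last_add_time_valid = false;
}
#endif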
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A virtio driver does virtqueue_add_buf() multiple times before finally
calling virtqueue_kick(); previously we only exposed the added buffers
in the virtqueue_kick() call. This means we don't need a memory
barrier in virtqueue_add_buf(), but it reduces concurrency as the
device (ie. host) can't see the buffers until the kick.
In the unusual (but now possible) case where a driver does add_buf()
and get_buf() without doing a kick, we do need to insert one before
our counter wraps. Otherwise we could wrap num_added, and later on
not realize that we have passed the marker where we should have
kicked.
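Sketched, the guard at the end of virtqueue_add_buf() looks roughly like
this (the threshold is the 16-bit index space; exact placement is an
assumption):

	/* Descriptors are exposed immediately now, so count them and kick
	 * pre-emptively if the counter is about to wrap, in case the driver
	 * never gets around to kicking on its own. */
	vq->num_added++;
	if (unlikely(vq->num_added == (1 << 16) - 1))
		virtqueue_kick(_vq);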
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we know vq->vring.num is a power of 2, the modulus can be replaced
by a mask (the power-of-2 requirement is asserted in vring_new_virtqueue()).
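Concretely, for a power-of-two num, x % num == x & (num - 1), so the
ring index can be computed without a division; a sketch of the add path:

	/* Before: avail = vq->vring.avail->idx % vq->vring.num; */
	avail = vq->vring.avail->idx & (vq->vring.num - 1);
	vq->vring.avail->ring[avail] = head;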
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Based on patch by Christoph for virtio_blk speedup:
Split virtqueue_kick to be able to do the actual notification
outside the lock protecting the virtqueue. This patch was
originally done by Stefan Hajnoczi, but I can't find the
original one anymore and had to recreate it from memory.
Pointers to the original or corrections for the commit message
are welcome.
Stefan's patch was here:
a6d06644e3
http://www.spinics.net/lists/linux-virtualization/msg14616.html
Third time's the charm!
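The intended call pattern, sketched (the lock here is illustrative, e.g.
a virtio_blk queue lock):

	/* Decide whether a notification is needed under the vq lock, but do
	 * the (possibly expensive) host notification outside it. */
	spin_lock_irqsave(&vblk->lock, flags);
	notify = virtqueue_kick_prepare(vq);
	spin_unlock_irqrestore(&vblk->lock, flags);

	if (notify)
		virtqueue_notify(vq);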
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Remove wrapper functions. This makes the allocation type explicit in
all callers; I used GFP_KERNEL where it seemed obvious, and left it at
GFP_ATOMIC otherwise.
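For illustration, a call site now spells out the allocation type itself
(a sketch; the buffer setup is an assumption):

	struct scatterlist sg;
	int err;

	sg_init_one(&sg, buf, len);
	err = virtqueue_add_buf(vq, &sg, 1 /* out */, 0 /* in */, buf,
				GFP_ATOMIC);
	if (err < 0)
		return err;	/* e.g. ring full */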
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Christoph Hellwig <hch@lst.de>
The old documentation is left over from when we used a structure with
strategy pointers.
And move the documentation to the C file as per kernel practice.
Though I disagree...
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Christoph Hellwig <hch@lst.de>
We were cheating with our barriers; using the smp ones rather than the
real device ones. That was fine, until rpmsg came along, which is
used to talk to a real device (a non-SMP CPU).
Unfortunately, just putting back the real barriers (reverting
d57ed95d) causes a performance regression on virtio-pci. In
particular, Amos reports that for netbench's TCP_RR over virtio_net, CPU
utilization increased by up to 35% while throughput went down by up to
14%.
By comparison, this branch is in the noise.
Reference: https://lkml.org/lkml/2011/12/11/22
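The gist of the resolution, sketched (the per-virtqueue weak_barriers
flag and the exact helper form are assumptions):

/* Sketch: SMP barriers suffice when the "device" is really the host,
 * i.e. another CPU in the same coherency domain (virtio-pci); a real
 * device such as an rpmsg remote processor needs mandatory barriers. */
static inline void virtio_mb(struct vring_virtqueue *vq)
{
	if (vq->weak_barriers)
		smp_mb();
	else
		mb();
}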
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A virtio PCI device reset actually just does an I/O write, which in PCI
is posted: it can complete on the CPU before the device has received it.
Further, interrupts might have been pending on another CPU, so the
device callback might get invoked after the reset.
This conflicts with how drivers use reset, which is typically:
	reset
	unregister
A callback running after the reset has completed can race with
unregister, potentially leading to use-after-free bugs.
Fix this by flushing out the write and flushing pending interrupts.
This assumes the device is never reset from its vq/config callbacks,
or in parallel with being added or removed; document this assumption.
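A sketch of the fixed reset path (register names follow the legacy
virtio-pci layout; vp_synchronize_vectors() stands in for "flush pending
interrupts" and is an assumption here):

static void vp_reset(struct virtio_device *vdev)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);

	/* Writing 0 to the status register resets the device... */
	iowrite8(0, vp_dev->ioaddr + VIRTIO_PCI_STATUS);
	/* ...but that write is posted, so read the register back to make
	 * sure the device has actually seen the reset before we return. */
	ioread8(vp_dev->ioaddr + VIRTIO_PCI_STATUS);
	/* Flush any vq/config callbacks already in flight on other CPUs. */
	vp_synchronize_vectors(vp_dev);
}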
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fix this compile error on s390:
CC [M] drivers/virtio/virtio_mmio.o
drivers/virtio/virtio_mmio.c: In function 'vm_get_features':
drivers/virtio/virtio_mmio.c:107:2: error: implicit declaration of function 'writel'
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>