Merge branch 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.34' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (145 commits)
  KVM: x86: Add KVM_CAP_X86_ROBUST_SINGLESTEP
  KVM: VMX: Update instruction length on intercepted BP
  KVM: Fix emulate_sys[call, enter, exit]()'s fault handling
  KVM: Fix segment descriptor loading
  KVM: Fix load_guest_segment_descriptor() to inject page fault
  KVM: x86 emulator: Forbid modifying CS segment register by mov instruction
  KVM: Convert kvm->requests_lock to raw_spinlock_t
  KVM: Convert i8254/i8259 locks to raw_spinlocks
  KVM: x86 emulator: disallow opcode 82 in 64-bit mode
  KVM: x86 emulator: code style cleanup
  KVM: Plan obsolescence of kernel allocated slots, paravirt mmu
  KVM: x86 emulator: Add LOCK prefix validity checking
  KVM: x86 emulator: Check CPL level during privilege instruction emulation
  KVM: x86 emulator: Fix popf emulation
  KVM: x86 emulator: Check IOPL level during io instruction emulation
  KVM: x86 emulator: fix memory access during x86 emulation
  KVM: x86 emulator: Add Virtual-8086 mode of emulation
  KVM: x86 emulator: Add group9 instruction decoding
  KVM: x86 emulator: Add group8 instruction decoding
  KVM: do not store wqh in irqfd
  ...

Trivial conflicts in Documentation/feature-removal-schedule.txt
Linus Torvalds
2010-03-05 13:12:34 -08:00
74 changed files with 3772 additions and 1663 deletions
@@ -556,3 +556,35 @@ Why: udev fully replaces this special file system that only contains CAPI
NCCI TTY device nodes. User space (pppdcapiplugin) works without
noticing the difference.
Who: Jan Kiszka <jan.kiszka@web.de>
----------------------------
What: KVM memory aliases support
When: July 2010
Why: Memory aliasing support is used for speeding up guest vga access
through the vga windows.
Modern userspace no longer uses this feature, so it's just bitrotted
code and can be removed with no impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: KVM kernel-allocated memory slots
When: July 2010
Why: Since 2.6.25, kvm supports user-allocated memory slots, which are
much more flexible than kernel-allocated slots. All current userspace
supports the newer interface and this code can be removed with no
impact.
Who: Avi Kivity <avi@redhat.com>
----------------------------
What: KVM paravirt mmu host support
When: January 2011
Why: The paravirt mmu host support is slower than non-paravirt mmu, both
on newer and older hardware. It is already not exposed to the guest,
and kept only for live migration purposes.
Who: Avi Kivity <avi@redhat.com>
----------------------------
+6 -6
@@ -23,12 +23,12 @@ of a virtual machine. The ioctls belong to three classes
Only run vcpu ioctls from the same thread that was used to create the
vcpu.
-2. File descritpors
+2. File descriptors
The kvm API is centered around file descriptors. An initial
open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
-handle will create a VM file descripror which can be used to issue VM
+handle will create a VM file descriptor which can be used to issue VM
ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
and return a file descriptor pointing to it. Finally, ioctls on a vcpu
fd can be used to control the vcpu, including the important task of
@@ -643,7 +643,7 @@ Type: vm ioctl
Parameters: struct kvm_clock_data (in)
Returns: 0 on success, -1 on error
-Sets the current timestamp of kvmclock to the valued specific in its parameter.
+Sets the current timestamp of kvmclock to the value specified in its parameter.
In conjunction with KVM_GET_CLOCK, it is used to ensure monotonicity on scenarios
such as migration.
@@ -795,11 +795,11 @@ Unused.
__u64 data_offset; /* relative to kvm_run start */
} io;
-If exit_reason is KVM_EXIT_IO_IN or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_IO, then the vcpu has
executed a port I/O instruction which could not be satisfied by kvm.
data_offset describes where the data is located (KVM_EXIT_IO_OUT) or
where kvm expects application code to place the data for the next
-KVM_RUN invocation (KVM_EXIT_IO_IN). Data format is a patcked array.
+KVM_RUN invocation (KVM_EXIT_IO_IN). Data format is a packed array.
struct {
struct kvm_debug_exit_arch arch;
@@ -815,7 +815,7 @@ Unused.
__u8 is_write;
} mmio;
-If exit_reason is KVM_EXIT_MMIO or KVM_EXIT_IO_OUT, then the vcpu has
+If exit_reason is KVM_EXIT_MMIO, then the vcpu has
executed a memory-mapped I/O instruction which could not be satisfied
by kvm. The 'data' member contains the written data if 'is_write' is
true, and should be filled by application code otherwise.
+1 -1
@@ -3173,7 +3173,7 @@ F: arch/x86/include/asm/svm.h
F: arch/x86/kvm/svm.c
KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
-M: Hollis Blanchard <hollisb@us.ibm.com>
+M: Alexander Graf <agraf@suse.de>
L: kvm-ppc@vger.kernel.org
W: http://kvm.qumranet.com
S: Supported
+1
@@ -26,6 +26,7 @@ config KVM
select ANON_INODES
select HAVE_KVM_IRQCHIP
select KVM_APIC_ARCHITECTURE
+select KVM_MMIO
---help---
Support hosting fully virtualized guest machines using hardware
virtualization extensions. You will need a fairly recent
+30 -20
@@ -241,10 +241,10 @@ static int handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
return 0;
mmio:
if (p->dir)
-r = kvm_io_bus_read(&vcpu->kvm->mmio_bus, p->addr,
+r = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, p->addr,
p->size, &p->data);
else
-r = kvm_io_bus_write(&vcpu->kvm->mmio_bus, p->addr,
+r = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, p->addr,
p->size, &p->data);
if (r)
printk(KERN_ERR"kvm: No iodevice found! addr:%lx\n", p->addr);
@@ -636,12 +636,9 @@ static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
union context *host_ctx, *guest_ctx;
-int r;
+int r, idx;
-/*
- * down_read() may sleep and return with interrupts enabled
- */
-down_read(&vcpu->kvm->slots_lock);
+idx = srcu_read_lock(&vcpu->kvm->srcu);
again:
if (signal_pending(current)) {
@@ -663,7 +660,7 @@ again:
if (r < 0)
goto vcpu_run_fail;
-up_read(&vcpu->kvm->slots_lock);
+srcu_read_unlock(&vcpu->kvm->srcu, idx);
kvm_guest_enter();
/*
@@ -687,7 +684,7 @@ again:
kvm_guest_exit();
preempt_enable();
-down_read(&vcpu->kvm->slots_lock);
+idx = srcu_read_lock(&vcpu->kvm->srcu);
r = kvm_handle_exit(kvm_run, vcpu);
@@ -697,10 +694,10 @@ again:
}
out:
-up_read(&vcpu->kvm->slots_lock);
+srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (r > 0) {
kvm_resched(vcpu);
-down_read(&vcpu->kvm->slots_lock);
+idx = srcu_read_lock(&vcpu->kvm->srcu);
goto again;
}
@@ -971,7 +968,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
goto out;
r = kvm_setup_default_irq_routing(kvm);
if (r) {
-kfree(kvm->arch.vioapic);
+kvm_ioapic_destroy(kvm);
goto out;
}
break;
@@ -1377,12 +1374,14 @@ static void free_kvm(struct kvm *kvm)
static void kvm_release_vm_pages(struct kvm *kvm)
{
+struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
int i, j;
unsigned long base_gfn;
-for (i = 0; i < kvm->nmemslots; i++) {
-memslot = &kvm->memslots[i];
+slots = rcu_dereference(kvm->memslots);
+for (i = 0; i < slots->nmemslots; i++) {
+memslot = &slots->memslots[i];
base_gfn = memslot->base_gfn;
for (j = 0; j < memslot->npages; j++) {
@@ -1405,6 +1404,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kfree(kvm->arch.vioapic);
kvm_release_vm_pages(kvm);
kvm_free_physmem(kvm);
+cleanup_srcu_struct(&kvm->srcu);
free_kvm(kvm);
}
@@ -1576,15 +1576,15 @@ out:
return r;
}
-int kvm_arch_set_memory_region(struct kvm *kvm,
-struct kvm_userspace_memory_region *mem,
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+struct kvm_memory_slot *memslot,
struct kvm_memory_slot old,
+struct kvm_userspace_memory_region *mem,
int user_alloc)
{
unsigned long i;
unsigned long pfn;
-int npages = mem->memory_size >> PAGE_SHIFT;
-struct kvm_memory_slot *memslot = &kvm->memslots[mem->slot];
+int npages = memslot->npages;
unsigned long base_gfn = memslot->base_gfn;
if (base_gfn + npages > (KVM_MAX_MEM_SIZE >> PAGE_SHIFT))
@@ -1608,6 +1608,14 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
return 0;
}
+void kvm_arch_commit_memory_region(struct kvm *kvm,
+struct kvm_userspace_memory_region *mem,
+struct kvm_memory_slot old,
+int user_alloc)
+{
+return;
+}
void kvm_arch_flush_shadow(struct kvm *kvm)
{
kvm_flush_remote_tlbs(kvm);
@@ -1802,7 +1810,7 @@ static int kvm_ia64_sync_dirty_log(struct kvm *kvm,
if (log->slot >= KVM_MEMORY_SLOTS)
goto out;
-memslot = &kvm->memslots[log->slot];
+memslot = &kvm->memslots->memslots[log->slot];
r = -ENOENT;
if (!memslot->dirty_bitmap)
goto out;
@@ -1827,6 +1835,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
struct kvm_memory_slot *memslot;
int is_dirty = 0;
+mutex_lock(&kvm->slots_lock);
spin_lock(&kvm->arch.dirty_log_lock);
r = kvm_ia64_sync_dirty_log(kvm, log);
@@ -1840,12 +1849,13 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
/* If nothing is dirty, don't bother messing with page tables. */
if (is_dirty) {
kvm_flush_remote_tlbs(kvm);
-memslot = &kvm->memslots[log->slot];
+memslot = &kvm->memslots->memslots[log->slot];
n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
memset(memslot->dirty_bitmap, 0, n);
}
r = 0;
out:
+mutex_unlock(&kvm->slots_lock);
spin_unlock(&kvm->arch.dirty_log_lock);
return r;
}
+13 -15
@@ -75,7 +75,7 @@ static void set_pal_result(struct kvm_vcpu *vcpu,
struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu);
-if (p && p->exit_reason == EXIT_REASON_PAL_CALL) {
+if (p->exit_reason == EXIT_REASON_PAL_CALL) {
p->u.pal_data.ret = result;
return ;
}
@@ -87,7 +87,7 @@ static void set_sal_result(struct kvm_vcpu *vcpu,
struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu);
-if (p && p->exit_reason == EXIT_REASON_SAL_CALL) {
+if (p->exit_reason == EXIT_REASON_SAL_CALL) {
p->u.sal_data.ret = result;
return ;
}
@@ -322,7 +322,7 @@ static u64 kvm_get_pal_call_index(struct kvm_vcpu *vcpu)
struct exit_ctl_data *p;
p = kvm_get_exit_data(vcpu);
-if (p && (p->exit_reason == EXIT_REASON_PAL_CALL))
+if (p->exit_reason == EXIT_REASON_PAL_CALL)
index = p->u.pal_data.gr28;
return index;
@@ -646,18 +646,16 @@ static void kvm_get_sal_call_data(struct kvm_vcpu *vcpu, u64 *in0, u64 *in1,
p = kvm_get_exit_data(vcpu);
-if (p) {
-if (p->exit_reason == EXIT_REASON_SAL_CALL) {
+if (p->exit_reason == EXIT_REASON_SAL_CALL) {
*in0 = p->u.sal_data.in0;
*in1 = p->u.sal_data.in1;
*in2 = p->u.sal_data.in2;
*in3 = p->u.sal_data.in3;
*in4 = p->u.sal_data.in4;
*in5 = p->u.sal_data.in5;
*in6 = p->u.sal_data.in6;
*in7 = p->u.sal_data.in7;
return ;
-}
}
*in0 = 0;
}
+2 -2
View File
@@ -316,8 +316,8 @@ void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma)
return;
} else {
inst_type = -1;
-panic_vm(vcpu, "Unsupported MMIO access instruction! \
-Bunld[0]=0x%lx, Bundle[1]=0x%lx\n",
+panic_vm(vcpu, "Unsupported MMIO access instruction! "
+"Bunld[0]=0x%lx, Bundle[1]=0x%lx\n",
bundle.i64[0], bundle.i64[1]);
}
+2 -2
@@ -1639,8 +1639,8 @@ void vcpu_set_psr(struct kvm_vcpu *vcpu, unsigned long val)
* Otherwise panic
*/
if (val & (IA64_PSR_PK | IA64_PSR_IS | IA64_PSR_VM))
-panic_vm(vcpu, "Only support guests with vpsr.pk =0 \
-& vpsr.is=0\n");
+panic_vm(vcpu, "Only support guests with vpsr.pk =0 "
+"& vpsr.is=0\n");
/*
* For those IA64_PSR bits: id/da/dd/ss/ed/ia
+6
@@ -97,4 +97,10 @@
#define RESUME_HOST RESUME_FLAG_HOST
#define RESUME_HOST_NV (RESUME_FLAG_HOST|RESUME_FLAG_NV)
+#define KVM_GUEST_MODE_NONE 0
+#define KVM_GUEST_MODE_GUEST 1
+#define KVM_GUEST_MODE_SKIP 2
+#define KVM_INST_FETCH_FAILED -1
#endif /* __POWERPC_KVM_ASM_H__ */
+9 -2
@@ -22,7 +22,7 @@
#include <linux/types.h>
#include <linux/kvm_host.h>
-#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s_64_asm.h>
struct kvmppc_slb {
u64 esid;
@@ -33,7 +33,8 @@ struct kvmppc_slb {
bool Ks;
bool Kp;
bool nx;
-bool large;
+bool large; /* PTEs are 16MB */
+bool tb; /* 1TB segment */
bool class;
};
@@ -69,6 +70,7 @@ struct kvmppc_sid_map {
struct kvmppc_vcpu_book3s {
struct kvm_vcpu vcpu;
+struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
struct kvmppc_sid_map sid_map[SID_MAP_NUM];
struct kvmppc_slb slb[64];
struct {
@@ -89,6 +91,7 @@ struct kvmppc_vcpu_book3s {
u64 vsid_next;
u64 vsid_max;
int context_id;
+ulong prog_flags; /* flags to inject when giving a 700 trap */
};
#define CONTEXT_HOST 0
@@ -119,6 +122,10 @@ extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
extern u32 kvmppc_trampoline_lowmem;
extern u32 kvmppc_trampoline_enter;
+extern void kvmppc_rmcall(ulong srr0, ulong srr1);
+extern void kvmppc_load_up_fpu(void);
+extern void kvmppc_load_up_altivec(void);
+extern void kvmppc_load_up_vsx(void);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{
@@ -20,6 +20,8 @@
#ifndef __ASM_KVM_BOOK3S_ASM_H__
#define __ASM_KVM_BOOK3S_ASM_H__
+#ifdef __ASSEMBLY__
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/kvm_asm.h>
@@ -55,4 +57,20 @@ kvmppc_resume_\intno:
#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */
+#else /*__ASSEMBLY__ */
+struct kvmppc_book3s_shadow_vcpu {
+ulong gpr[14];
+u32 cr;
+u32 xer;
+ulong host_r1;
+ulong host_r2;
+ulong handler;
+ulong scratch0;
+ulong scratch1;
+ulong vmhandler;
+};
+#endif /*__ASSEMBLY__ */
#endif /* __ASM_KVM_BOOK3S_ASM_H__ */
+3
@@ -52,9 +52,12 @@ struct kvmppc_vcpu_e500 {
u32 mas5;
u32 mas6;
u32 mas7;
+u32 l1csr0;
u32 l1csr1;
u32 hid0;
u32 hid1;
+u32 tlb0cfg;
+u32 tlb1cfg;
struct kvm_vcpu vcpu;
};
+21 -2
@@ -167,23 +167,40 @@ struct kvm_vcpu_arch {
ulong trampoline_lowmem;
ulong trampoline_enter;
ulong highmem_handler;
+ulong rmcall;
ulong host_paca_phys;
struct kvmppc_mmu mmu;
#endif
-u64 fpr[32];
ulong gpr[32];
+u64 fpr[32];
+u32 fpscr;
+#ifdef CONFIG_ALTIVEC
+vector128 vr[32];
+vector128 vscr;
+#endif
+#ifdef CONFIG_VSX
+u64 vsr[32];
+#endif
ulong pc;
-u32 cr;
ulong ctr;
ulong lr;
+#ifdef CONFIG_BOOKE
ulong xer;
+u32 cr;
+#endif
ulong msr;
#ifdef CONFIG_PPC64
ulong shadow_msr;
+ulong shadow_srr1;
ulong hflags;
+ulong guest_owned_ext;
#endif
u32 mmucr;
ulong sprg0;
@@ -242,6 +259,8 @@ struct kvm_vcpu_arch {
#endif
ulong fault_dear;
ulong fault_esr;
+ulong queued_dear;
+ulong queued_esr;
gpa_t paddr_accessed;
u8 io_gpr; /* GPR used as IO source/target */
+82 -1
@@ -28,6 +28,9 @@
#include <linux/types.h>
#include <linux/kvm_types.h>
#include <linux/kvm_host.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/kvm_book3s.h>
+#endif
enum emulation_result {
EMULATE_DONE, /* no further processing */
@@ -80,8 +83,9 @@ extern void kvmppc_core_vcpu_put(struct kvm_vcpu *vcpu);
extern void kvmppc_core_deliver_interrupts(struct kvm_vcpu *vcpu);
extern int kvmppc_core_pending_dec(struct kvm_vcpu *vcpu);
-extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu);
+extern void kvmppc_core_queue_program(struct kvm_vcpu *vcpu, ulong flags);
extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
+extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq);
@@ -95,4 +99,81 @@ extern void kvmppc_booke_exit(void);
extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu);
#ifdef CONFIG_PPC_BOOK3S
/* We assume we're always acting on the current vcpu */
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
if ( num < 14 ) {
get_paca()->shadow_vcpu.gpr[num] = val;
to_book3s(vcpu)->shadow_vcpu.gpr[num] = val;
} else
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
if ( num < 14 )
return get_paca()->shadow_vcpu.gpr[num];
else
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.cr = val;
to_book3s(vcpu)->shadow_vcpu.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return get_paca()->shadow_vcpu.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.xer = val;
to_book3s(vcpu)->shadow_vcpu.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return get_paca()->shadow_vcpu.xer;
}
#else
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return vcpu->arch.xer;
}
#endif
#endif /* __POWERPC_KVM_PPC_H__ */
+5
@@ -19,6 +19,9 @@
#include <asm/mmu.h>
#include <asm/page.h>
#include <asm/exception-64e.h>
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+#include <asm/kvm_book3s_64_asm.h>
+#endif
register struct paca_struct *local_paca asm("r13");
@@ -135,6 +138,8 @@ struct paca_struct {
u64 esid;
u64 vsid;
} kvm_slb[64]; /* guest SLB */
+/* We use this to store guest state in */
+struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
u8 kvm_slb_max; /* highest used guest slb entry */
u8 kvm_in_guest; /* are we inside the guest? */
#endif
+4
@@ -426,6 +426,10 @@
#define SRR1_WAKEMT 0x00280000 /* mtctrl */
#define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */
#define SRR1_WAKETHERM 0x00100000 /* Thermal management interrupt */
+#define SRR1_PROGFPE 0x00100000 /* Floating Point Enabled */
+#define SRR1_PROGPRIV 0x00040000 /* Privileged instruction */
+#define SRR1_PROGTRAP 0x00020000 /* Trap */
+#define SRR1_PROGADDR 0x00010000 /* SRR0 contains subsequent addr */
#define SPRN_HSRR0 0x13A /* Save/Restore Register 0 */
#define SPRN_HSRR1 0x13B /* Save/Restore Register 1 */
+30 -3
@@ -194,6 +194,30 @@ int main(void)
DEFINE(PACA_KVM_IN_GUEST, offsetof(struct paca_struct, kvm_in_guest));
DEFINE(PACA_KVM_SLB, offsetof(struct paca_struct, kvm_slb));
DEFINE(PACA_KVM_SLB_MAX, offsetof(struct paca_struct, kvm_slb_max));
DEFINE(PACA_KVM_CR, offsetof(struct paca_struct, shadow_vcpu.cr));
DEFINE(PACA_KVM_XER, offsetof(struct paca_struct, shadow_vcpu.xer));
DEFINE(PACA_KVM_R0, offsetof(struct paca_struct, shadow_vcpu.gpr[0]));
DEFINE(PACA_KVM_R1, offsetof(struct paca_struct, shadow_vcpu.gpr[1]));
DEFINE(PACA_KVM_R2, offsetof(struct paca_struct, shadow_vcpu.gpr[2]));
DEFINE(PACA_KVM_R3, offsetof(struct paca_struct, shadow_vcpu.gpr[3]));
DEFINE(PACA_KVM_R4, offsetof(struct paca_struct, shadow_vcpu.gpr[4]));
DEFINE(PACA_KVM_R5, offsetof(struct paca_struct, shadow_vcpu.gpr[5]));
DEFINE(PACA_KVM_R6, offsetof(struct paca_struct, shadow_vcpu.gpr[6]));
DEFINE(PACA_KVM_R7, offsetof(struct paca_struct, shadow_vcpu.gpr[7]));
DEFINE(PACA_KVM_R8, offsetof(struct paca_struct, shadow_vcpu.gpr[8]));
DEFINE(PACA_KVM_R9, offsetof(struct paca_struct, shadow_vcpu.gpr[9]));
DEFINE(PACA_KVM_R10, offsetof(struct paca_struct, shadow_vcpu.gpr[10]));
DEFINE(PACA_KVM_R11, offsetof(struct paca_struct, shadow_vcpu.gpr[11]));
DEFINE(PACA_KVM_R12, offsetof(struct paca_struct, shadow_vcpu.gpr[12]));
DEFINE(PACA_KVM_R13, offsetof(struct paca_struct, shadow_vcpu.gpr[13]));
DEFINE(PACA_KVM_HOST_R1, offsetof(struct paca_struct, shadow_vcpu.host_r1));
DEFINE(PACA_KVM_HOST_R2, offsetof(struct paca_struct, shadow_vcpu.host_r2));
DEFINE(PACA_KVM_VMHANDLER, offsetof(struct paca_struct,
shadow_vcpu.vmhandler));
DEFINE(PACA_KVM_SCRATCH0, offsetof(struct paca_struct,
shadow_vcpu.scratch0));
DEFINE(PACA_KVM_SCRATCH1, offsetof(struct paca_struct,
shadow_vcpu.scratch1));
#endif
#endif /* CONFIG_PPC64 */
@@ -389,8 +413,6 @@ int main(void)
DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid));
DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr));
DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
-DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
-DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
@@ -411,11 +433,16 @@ int main(void)
DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
+DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
+DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall));
DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags));
-#endif
+#else
+DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
+DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
+#endif /* CONFIG_PPC64 */
#endif
#ifdef CONFIG_44x
DEFINE(PGD_T_LOG2, PGD_T_LOG2);
+1
@@ -107,6 +107,7 @@ EXPORT_SYMBOL(giveup_altivec);
#endif /* CONFIG_ALTIVEC */
#ifdef CONFIG_VSX
EXPORT_SYMBOL(giveup_vsx);
+EXPORT_SYMBOL_GPL(__giveup_vsx);
#endif /* CONFIG_VSX */
#ifdef CONFIG_SPE
EXPORT_SYMBOL(giveup_spe);
+13 -12
@@ -65,13 +65,14 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
*/
switch (dcrn) {
case DCRN_CPR0_CONFIG_ADDR:
-vcpu->arch.gpr[rt] = vcpu->arch.cpr0_cfgaddr;
+kvmppc_set_gpr(vcpu, rt, vcpu->arch.cpr0_cfgaddr);
break;
case DCRN_CPR0_CONFIG_DATA:
local_irq_disable();
mtdcr(DCRN_CPR0_CONFIG_ADDR,
vcpu->arch.cpr0_cfgaddr);
-vcpu->arch.gpr[rt] = mfdcr(DCRN_CPR0_CONFIG_DATA);
+kvmppc_set_gpr(vcpu, rt,
+mfdcr(DCRN_CPR0_CONFIG_DATA));
local_irq_enable();
break;
default:
@@ -93,11 +94,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
/* emulate some access in kernel */
switch (dcrn) {
case DCRN_CPR0_CONFIG_ADDR:
-vcpu->arch.cpr0_cfgaddr = vcpu->arch.gpr[rs];
+vcpu->arch.cpr0_cfgaddr = kvmppc_get_gpr(vcpu, rs);
break;
default:
run->dcr.dcrn = dcrn;
-run->dcr.data = vcpu->arch.gpr[rs];
+run->dcr.data = kvmppc_get_gpr(vcpu, rs);
run->dcr.is_write = 1;
vcpu->arch.dcr_needed = 1;
kvmppc_account_exit(vcpu, DCR_EXITS);
@@ -146,13 +147,13 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
switch (sprn) {
case SPRN_PID:
-kvmppc_set_pid(vcpu, vcpu->arch.gpr[rs]); break;
+kvmppc_set_pid(vcpu, kvmppc_get_gpr(vcpu, rs)); break;
case SPRN_MMUCR:
-vcpu->arch.mmucr = vcpu->arch.gpr[rs]; break;
+vcpu->arch.mmucr = kvmppc_get_gpr(vcpu, rs); break;
case SPRN_CCR0:
-vcpu->arch.ccr0 = vcpu->arch.gpr[rs]; break;
+vcpu->arch.ccr0 = kvmppc_get_gpr(vcpu, rs); break;
case SPRN_CCR1:
-vcpu->arch.ccr1 = vcpu->arch.gpr[rs]; break;
+vcpu->arch.ccr1 = kvmppc_get_gpr(vcpu, rs); break;
default:
emulated = kvmppc_booke_emulate_mtspr(vcpu, sprn, rs);
}
@@ -167,13 +168,13 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
switch (sprn) {
case SPRN_PID:
-vcpu->arch.gpr[rt] = vcpu->arch.pid; break;
+kvmppc_set_gpr(vcpu, rt, vcpu->arch.pid); break;
case SPRN_MMUCR:
-vcpu->arch.gpr[rt] = vcpu->arch.mmucr; break;
+kvmppc_set_gpr(vcpu, rt, vcpu->arch.mmucr); break;
case SPRN_CCR0:
-vcpu->arch.gpr[rt] = vcpu->arch.ccr0; break;
+kvmppc_set_gpr(vcpu, rt, vcpu->arch.ccr0); break;
case SPRN_CCR1:
-vcpu->arch.gpr[rt] = vcpu->arch.ccr1; break;
+kvmppc_set_gpr(vcpu, rt, vcpu->arch.ccr1); break;
default:
emulated = kvmppc_booke_emulate_mfspr(vcpu, sprn, rt);
}
+11 -9
@@ -439,7 +439,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
struct kvmppc_44x_tlbe *tlbe;
unsigned int gtlb_index;
-gtlb_index = vcpu->arch.gpr[ra];
+gtlb_index = kvmppc_get_gpr(vcpu, ra);
if (gtlb_index > KVM44x_GUEST_TLB_SIZE) {
printk("%s: index %d\n", __func__, gtlb_index);
kvmppc_dump_vcpu(vcpu);
@@ -455,15 +455,15 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
switch (ws) {
case PPC44x_TLB_PAGEID:
tlbe->tid = get_mmucr_stid(vcpu);
-tlbe->word0 = vcpu->arch.gpr[rs];
+tlbe->word0 = kvmppc_get_gpr(vcpu, rs);
break;
case PPC44x_TLB_XLAT:
-tlbe->word1 = vcpu->arch.gpr[rs];
+tlbe->word1 = kvmppc_get_gpr(vcpu, rs);
break;
case PPC44x_TLB_ATTRIB:
-tlbe->word2 = vcpu->arch.gpr[rs];
+tlbe->word2 = kvmppc_get_gpr(vcpu, rs);
break;
default:
@@ -500,18 +500,20 @@ int kvmppc_44x_emul_tlbsx(struct kvm_vcpu *vcpu, u8 rt, u8 ra, u8 rb, u8 rc)
unsigned int as = get_mmucr_sts(vcpu);
unsigned int pid = get_mmucr_stid(vcpu);
-ea = vcpu->arch.gpr[rb];
+ea = kvmppc_get_gpr(vcpu, rb);
if (ra)
-ea += vcpu->arch.gpr[ra];
+ea += kvmppc_get_gpr(vcpu, ra);
gtlb_index = kvmppc_44x_tlb_index(vcpu, ea, pid, as);
if (rc) {
+u32 cr = kvmppc_get_cr(vcpu);
if (gtlb_index < 0)
-vcpu->arch.cr &= ~0x20000000;
+kvmppc_set_cr(vcpu, cr & ~0x20000000);
else
-vcpu->arch.cr |= 0x20000000;
+kvmppc_set_cr(vcpu, cr | 0x20000000);
}
-vcpu->arch.gpr[rt] = gtlb_index;
+kvmppc_set_gpr(vcpu, rt, gtlb_index);
kvmppc_set_exit_type(vcpu, EMULATED_TLBSX_EXITS);
return EMULATE_DONE;

Some files were not shown because too many files have changed in this diff.