Implement the new CCW_CMD_SET_IND_ADAPTER command and try to enable
adapter interrupts for every device on the first startup. If the host
does not support adapter interrupts, fall back to normal I/O interrupts.
virtio-ccw adapter interrupts use the same isc as normal I/O subchannels
and share a summary indicator for all devices sharing the same indicator
area.
Indicator bits for the individual virtqueues may be contained in the same
indicator area for different devices.
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
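For illustration, a minimal sketch of the fallback described above. The helper
ccw_io_helper() and the VIRTIO_CCW_DOING_* interrupt parameters follow the
virtio-ccw driver's naming, but this is an assumption-laden sketch, not the
actual patch; setting up the ccw's data address and count for the indicator
area is left to the caller.

static int virtio_ccw_register_indicators(struct virtio_ccw_device *vcdev,
					   struct ccw1 *ccw)
{
	int ret;

	/* caller has pointed ccw->cda/ccw->count at the indicator area */
	ccw->cmd_code = CCW_CMD_SET_IND_ADAPTER;
	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_IND_ADAPTER);
	if (ret) {
		/* host lacks adapter interrupt support, fall back to
		 * classic per-device I/O interrupt indicators */
		ccw->cmd_code = CCW_CMD_SET_IND;
		ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_IND);
	}
	return ret;
}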
Add airq_iv_alloc and airq_iv_free to allocate and free consecutive
ranges of irqs from the interrupt vector.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
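A minimal usage sketch, assuming the existing airq_iv_create()/airq_iv_release()
helpers from <asm/airq.h> and an allocation-capable vector; the vector size and
error handling are illustrative only.

static int example_alloc_range(unsigned int nr_queues)
{
	struct airq_iv *iv;
	unsigned long bit;

	/* interrupt vector with 64 indicator bits and an allocation bitmap */
	iv = airq_iv_create(64, AIRQ_IV_ALLOC);
	if (!iv)
		return -ENOMEM;

	/* reserve nr_queues consecutive irq bits */
	bit = airq_iv_alloc(iv, nr_queues);
	if (bit == -1UL) {
		airq_iv_release(iv);
		return -ENOSPC;
	}

	/* ... use bits bit .. bit + nr_queues - 1 ... */

	airq_iv_free(iv, bit, nr_queues);
	airq_iv_release(iv);
	return 0;
}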
We can use kvm_get_vcpu() now and don't need the
local_int array in the floating_int struct anymore.
This also means we don't have to hold the float_int.lock
in some places.
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
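As a hedged sketch of the lookup this enables (the field names follow the s390
KVM code, but the helper itself is illustrative, not the exact patch):

static struct kvm_s390_local_interrupt *get_local_int(struct kvm *kvm,
						      u16 cpu_addr)
{
	struct kvm_vcpu *vcpu;

	if (cpu_addr >= KVM_MAX_VCPUS)
		return NULL;
	/* no need to take float_int.lock for this lookup anymore */
	vcpu = kvm_get_vcpu(kvm, cpu_addr);
	return vcpu ? &vcpu->arch.local_int : NULL;
}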
When SIGP SET_PREFIX is called with an illegal CPU id, it must return
the condition code 3 ("not operational") instead of 1. Also fix the
order in which the checks are done - CC3 has a higher priority than CC1.
And while we're at it, this patch also gets rid of the floating interrupt
lock here by using kvm_get_vcpu() to get the local_int struct of the
destination CPU.
Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
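A minimal sketch of the check order described above; the condition code
constants follow <asm/sigp.h>, but the handler itself is illustrative:

static int handle_set_prefix(struct kvm_vcpu *vcpu, u16 cpu_addr, u32 address)
{
	struct kvm_vcpu *dst_vcpu;

	/* cc 3 first: the destination cpu does not exist */
	if (cpu_addr >= KVM_MAX_VCPUS)
		return SIGP_CC_NOT_OPERATIONAL;
	dst_vcpu = kvm_get_vcpu(vcpu->kvm, cpu_addr);
	if (!dst_vcpu)
		return SIGP_CC_NOT_OPERATIONAL;

	/* cc 1 checks (e.g. prefix page not available) only come after
	 * the cc 3 check in the real handler */

	/* ... queue the set-prefix request for the destination cpu ... */
	return SIGP_CC_ORDER_CODE_ACCEPTED;
}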
We don't need to loop over all cpus to get the number of
vcpus. Let's use the already available online_vcpus counter instead.
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
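In other words (a trivial sketch, assuming the counter in struct kvm):

static unsigned int nr_vcpus(struct kvm *kvm)
{
	/* counter is maintained on vcpu creation, no loop needed */
	return atomic_read(&kvm->online_vcpus);
}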
For migration/reset we want to expose the guest breaking event
address register to userspace. Let's use ONE_REG for that purpose.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
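A hedged sketch of what exposing such a pseudo register via ONE_REG looks like;
KVM_REG_S390_GBEA and the sie_block field follow the upstream naming, but the
helper itself is illustrative:

static int get_gbea_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
	if (reg->id != KVM_REG_S390_GBEA)
		return -EINVAL;
	/* copy the breaking-event address to the user supplied buffer */
	return put_user(vcpu->arch.sie_block->gbea,
			(__u64 __user *)reg->addr);
}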
commit d208c79d63 (KVM: s390: Enable
the LPP facility for guests) enabled the LPP instruction for guests.
We should expose the program parameter as a pseudo register for
migration/reset etc. Let's also reset this value on initial CPU
reset.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
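For illustration, the reset part boils down to something like this sketch (the
field name follows the SIE control block, the helper is illustrative):

static void reset_program_parameter(struct kvm_vcpu *vcpu)
{
	/* initial cpu reset clears the program parameter */
	vcpu->arch.sie_block->pp = 0;
}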
Parameter conversion within the system call wrappers is only needed
for parameters which differ in size between compat and native mode,
i.e. which are eight bytes wide on 64 bit.
For system call parameters with a size of less than eight bytes the
called system call itself will perform parameter conversion anyway,
so we can save the double conversion of e.g. int parameters.
The only types which need to be converted are therefore pointer and
(unsigned) long parameters.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Instead of explicitly changing compat system call parameters from e.g.
unsigned long to compat_ulong_t, let the COMPAT_SYSCALL_WRAP macros
automatically detect (unsigned) long parameters and zero and sign
extend them.
The resulting binary is completely identical.
In addition add a sys_[system call name] prototype for each system call
wrapper. This will cause compile errors if the prototype does not match
the prototype in include/linux/syscalls.h.
Therefore we should now always get the correct zero and sign extension
of system call parameters. Pointers are handled like before.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
The compat syscall wrappers for sync_file_range and fallocate merged 32 bit
parameters into 64 bit parameters. Therefore they did more than just the
usual zero and/or sign extension of system call parameters.
So convert these two wrappers to full s390 specific compat system calls.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
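A hedged sketch of what such a merged-argument compat system call looks like;
the name follows the compat_sys_s390_* convention but is shown here only as an
illustration:

COMPAT_SYSCALL_DEFINE6(s390_sync_file_range, int, fd, u32, offhigh, u32, offlow,
		       u32, nhigh, u32, nlow, unsigned int, flags)
{
	/* merge the 32 bit register halves into 64 bit arguments */
	return sys_sync_file_range(fd, ((loff_t)offhigh << 32) + offlow,
				   ((loff_t)nhigh << 32) + nlow, flags);
}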
Introduce a new compat_wrap.c file which contains the s390 specific compat
system call wrappers.
The s390 specific system call wrappers only perform sign, zero and pointer
conversion of system call arguments before actually calling the non-compat
system call.
Therefore introduce COMPAT_SYSCALL_WRAPx macros which generate C code that
is nearly identical to the assembly code. This has the advantage that the
compiler will generate correct code, and we avoid the frequent copy-paste
errors seen in the compat_wrapper.S file.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
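As a rough illustration of what a generated wrapper amounts to (the real macros
build this with __MAP()/__SC_COMPAT_CAST() from <linux/syscalls.h>; the
expansion below is a hand-written sketch, not the literal macro output):

/* COMPAT_SYSCALL_WRAP2(munmap, unsigned long, addr, size_t, len);
 * roughly expands to: */
asmlinkage long compat_sys_munmap(compat_ulong_t addr, compat_size_t len)
{
	/* zero extend the 31 bit values before calling the native syscall */
	return sys_munmap((unsigned long)addr, (unsigned long)len);
}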
Convert s390 specific system calls to the new COMPAT_SYSCALL_DEFINE macro.
This allows us to get rid of the assembly compat wrappers.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
If an architecture has futex_atomic_cmpxchg_inatomic() implemented and no
runtime check is necessary, allow skipping the test within futex_init().
This allows us to get rid of some code which would always give the same
result, and also allows the compiler to optimize a couple of if statements
away.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Finn Thain <fthain@telegraphics.com.au>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Link: http://lkml.kernel.org/r/20140302120947.GA3641@osiris
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
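The idea, as a hedged sketch built around a config symbol the architecture can
select (simplified from the generic futex code):

#ifdef CONFIG_HAVE_FUTEX_CMPXCHG
#define futex_cmpxchg_enabled 1	/* constant, checks optimize away */
#else
static int __read_mostly futex_cmpxchg_enabled;
#endif

static void __init futex_detect_cmpxchg(void)
{
#ifndef CONFIG_HAVE_FUTEX_CMPXCHG
	u32 curval;

	/* -EFAULT on a NULL user pointer means the helper is functional */
	if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
		futex_cmpxchg_enabled = 1;
#endif
}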
Commit "kvm: Record the preemption status of vcpus using preempt notifiers"
caused a performance regression on s390. It turned out that a formerly
sleeping cpu which has just been woken up is no longer considered a yield
candidate, since it gave up the cpu voluntarily. To retain it as a
candidate, its preempted flag is set during wakeup and interrupt delivery
time.
Significant performance measurement work and code analysis to solve this
issue was provided by Mao Chuan Li and his team in Beijing.
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
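A hedged sketch of the wakeup-path change; the fields are the generic struct
kvm_vcpu members, the helper name is illustrative:

static void wakeup_and_keep_yield_candidate(struct kvm_vcpu *vcpu)
{
	/* mark as preempted so kvm_vcpu_on_spin() still considers it */
	vcpu->preempted = true;
	wake_up_interruptible(&vcpu->wq);
}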
A vcpu is defined to be runnable if an interrupt is pending.
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
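In code this is essentially a one-liner (sketch):

int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
	/* runnable iff an interrupt is pending for this vcpu */
	return kvm_cpu_has_interrupt(vcpu);
}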
The memset() within csum_partial_copy_from_user() is rather pointless since
copy_from_user() already clears the rest of the destination buffer if an
exception happens.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There is no user left, so remove it.
It was also potentially broken, since the function didn't clear the
destination memory if copy_from_user() failed, which would allow for
information leaks.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The maximum usable buffer size of the s390 debug feature (when using
the sprintf_view) is 11 * sizeof(long) (1 pointer for the format
string + 10 arguments). When a larger buffer size is specified the
additional memory is unused and wasted per debug entry. So reducing
the buffer size to this maximum usable size (or to the buffer size actually
used) will make more of the precious debug feature space usable.
For pci_msg, chsc_msg, and cio_crw we use the additional usable dbf
space to reduce the number of allocated pages.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
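For example, registering a debug area with exactly the usable buffer size could
look like this (sketch; the name and page count are illustrative):

static debug_info_t *example_dbf;

static int __init example_dbf_init(void)
{
	/* format string pointer + 10 arguments = 11 * sizeof(long) */
	example_dbf = debug_register("example_msg", 8, 1, 11 * sizeof(long));
	if (!example_dbf)
		return -ENOMEM;
	debug_register_view(example_dbf, &debug_sprintf_view);
	return 0;
}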
Add airq_iv_alloc and airq_iv_free to allocate and free consecutive
ranges of irqs from the interrupt vector.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add the pgtable_pmd_page_ctor/pgtable_pmd_page_dtor calls to the pmd
allocation and free functions and enable ARCH_ENABLE_SPLIT_PMD_PTLOCK
for 64 bit.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
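A hedged sketch of the allocation side, assuming the s390 crst_table helpers;
this is close to, but not necessarily identical with, the actual patch:

pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
{
	unsigned long *table = crst_table_alloc(mm);

	if (!table)
		return NULL;
	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
	/* set up the split PMD page table lock */
	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
		crst_table_free(mm, table);
		return NULL;
	}
	return (pmd_t *) table;
}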
Fix some numbers in the comments describing the layout of the bit maps.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The guest page state needs to be reset to stable for all pages
on initial program load via diagnose 0x308.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This patch enables Collaborative Memory Management (CMM) for kvm
on s390. CMM allows the guest to inform the host about page usage
(see arch/s390/mm/cmm.c). The host uses this information to avoid
swapping in unused pages in the page fault handler. Further, a CPU
provided list of unused invalid pages is processed to reclaim swap
space of not yet accessed unused pages.
[ Martin Schwidefsky: patch reordering and cleanup ]
Signed-off-by: Konstantin Weitz <konstantin.weitz@gmail.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Git commit 050eef364a "[S390] fix tlb flushing vs. concurrent
/proc accesses" introduced the attach counter to avoid using the
mm_users value to decide between IPTE for every PTE and lazy TLB
flushing with IDTE. That fixed the problem with mm_users but it
introduced another subtle race, fortunately one that is very hard
to hit.
The background is the requirement of the architecture that a valid
PTE may not be changed while it can be used concurrently by another
cpu. The decision between IPTE and lazy TLB flushing needs to be
done while the PTE is still valid. Now if the virtual cpu is
temporarily stopped after the decision to use lazy TLB flushing but
before the invalid bit of the PTE has been set, another cpu can attach
the mm, find that flush_mm is set, do the IDTE, return to userspace,
and recreate a TLB entry that uses the PTE in question. When the first,
stopped cpu continues, it will change the PTE while the mm is attached on
another cpu. The first cpu will do another IDTE shortly after the
modification of the PTE which makes the race window quite short.
To fix this race the CPU that wants to attach the address space of a
user space thread needs to wait for the end of the PTE modification.
The number of concurrent TLB flushers for an mm is tracked in the
upper 16 bits of the attach_count and finish_arch_post_lock_switch
is used to wait for the end of the flush operation if required.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
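A hedged sketch of the counting scheme; the helper names are illustrative, only
the attach_count split and the wait loop correspond to what is described above:

/* upper 16 bits of attach_count track in-flight TLB flushers */
static inline void tlb_flush_start(struct mm_struct *mm)
{
	atomic_add(0x10000, &mm->context.attach_count);
}

static inline void tlb_flush_finish(struct mm_struct *mm)
{
	atomic_sub(0x10000, &mm->context.attach_count);
}

void finish_arch_post_lock_switch(void)
{
	struct mm_struct *mm = current->mm;

	if (!mm)
		return;
	/* wait until no other cpu is in the middle of a PTE update */
	while (atomic_read(&mm->context.attach_count) >> 16)
		cpu_relax();
}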
The uaccesspt kernel parameter allows enforcing the use of the uaccess page
table walk variant. This is mainly for debugging purposes, so this mode
can also be enabled on machines which support the mvcos instruction.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
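A minimal sketch of such an early parameter hook (the flag and handler names
are illustrative):

static int force_uaccess_pt __initdata;

static int __init parse_uaccess_pt(char *arg)
{
	/* force the page table walk variant even if mvcos is available */
	force_uaccess_pt = 1;
	return 0;
}
early_param("uaccesspt", parse_uaccess_pt);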
Remove another leftover from the time when we supported running
user space in either home or primary address space.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
MACHINE_HAS_MVCOS is used exactly once when the machine is brought up.
There is no need to cache the flag in the machine_flags.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The types 'size_t' and 'unsigned long' have been used randomly for the
uaccess functions. This looks rather confusing.
So let's change all functions to use unsigned long instead and get rid
of size_t in order to have a consistent interface.
The only exception is strncpy_from_user() which uses 'long' since it
may return a signed value (-EFAULT).
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
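After the change, the interface reads uniformly; a sketch of the resulting
prototypes:

unsigned long __must_check copy_from_user(void *to, const void __user *from,
					   unsigned long n);
unsigned long __must_check copy_to_user(void __user *to, const void *from,
					unsigned long n);
unsigned long __must_check clear_user(void __user *to, unsigned long n);
long __must_check strncpy_from_user(char *dst, const char __user *src,
				    long count);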