Merge branches 'release', 'bugzilla-12011', 'bugzilla-12632', 'misc' and 'suspend' into release
commit 5acfac5a64
315 changed files with 5015 additions and 1538 deletions
CREDITS | 1
@@ -2166,7 +2166,6 @@ D: Initial implementation of VC's, pty's and select()
N: Pavel Machek
E: pavel@ucw.cz
E: pavel@suse.cz
D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd
D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
D: work on suspend-to-ram/disk, killing duplicates from ioctl32
@@ -1,6 +1,6 @@
What: /sys/firmware/memmap/
Date: June 2008
Contact: Bernhard Walle <bwalle@suse.de>
Contact: Bernhard Walle <bernhard.walle@gmx.de>
Description:
On all platforms, the firmware provides a memory map which the
kernel reads. The resources from that memory map are registered
@@ -93,7 +93,7 @@ the PCI Express Port Bus driver from loading a service driver.

int pcie_port_service_register(struct pcie_port_service_driver *new)

This API replaces the Linux Driver Model's pci_module_init API. A
This API replaces the Linux Driver Model's pci_register_driver API. A
service driver should always calls pcie_port_service_register at
module init. Note that after service driver being loaded, calls
such as pci_enable_device(dev) and pci_set_master(dev) are no longer
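As an aside, a minimal sketch of the registration pattern this hunk documents — a service driver registering with the port bus at module init. The struct fields are abbreviated and the names are illustrative, not the full kernel definition:

/* Hedged sketch: a PCIe port service driver registering at module
 * init, as the documentation above describes. Consult the real
 * struct pcie_port_service_driver for the complete field list. */
static struct pcie_port_service_driver my_service = {
        .name = "my_pcie_service",
        /* .port_type, .service, .probe, .remove, ... */
};

static int __init my_service_init(void)
{
        return pcie_port_service_register(&my_service);
}
module_init(my_service_init);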
@@ -252,10 +252,8 @@ cgroup file system directories.
When a task is moved from one cgroup to another, it gets a new
css_set pointer - if there's an already existing css_set with the
desired collection of cgroups then that group is reused, else a new
css_set is allocated. Note that the current implementation uses a
linear search to locate an appropriate existing css_set, so isn't
very efficient. A future version will use a hash table for better
performance.
css_set is allocated. The appropriate existing css_set is located by
looking into a hash table.

To allow access from a cgroup to the css_sets (and hence tasks)
that comprise it, a set of cg_cgroup_link objects form a lattice;
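For readers following the css_set change above, a rough sketch of the hash-table idea — illustrative only, not the kernel's actual css_set code: the lookup key is derived from the array of per-subsystem state pointers, so a matching css_set is found in roughly constant time instead of by a linear scan.

/* Hedged sketch: derive a hash key from the subsystem-state pointer
 * array; hash buckets then replace the old linear search. */
static unsigned long css_set_key(void **css, int count)
{
        unsigned long key = 0;
        int i;

        for (i = 0; i < count; i++)
                key += (unsigned long)css[i];
        return key;     /* bucket = key % table_size */
}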
@@ -142,7 +142,7 @@ into the rest of the kernel, none in performance critical paths:
- in fork and exit, to attach and detach a task from its cpuset.
- in sched_setaffinity, to mask the requested CPUs by what's
  allowed in that tasks cpuset.
- in sched.c migrate_all_tasks(), to keep migrating tasks within
- in sched.c migrate_live_tasks(), to keep migrating tasks within
  the CPUs allowed by their cpuset, if possible.
- in the mbind and set_mempolicy system calls, to mask the requested
  Memory Nodes by what's allowed in that tasks cpuset.
@@ -175,6 +175,10 @@ files describing that cpuset:
- mem_exclusive flag: is memory placement exclusive?
- mem_hardwall flag: is memory allocation hardwalled
- memory_pressure: measure of how much paging pressure in cpuset
- memory_spread_page flag: if set, spread page cache evenly on allowed nodes
- memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
- sched_load_balance flag: if set, load balance within CPUs on that cpuset
- sched_relax_domain_level: the searching range when migrating tasks

In addition, the root cpuset only has the following file:
- memory_pressure_enabled flag: compute memory_pressure?
@@ -252,7 +256,7 @@ is causing.

This is useful both on tightly managed systems running a wide mix of
submitted jobs, which may choose to terminate or re-prioritize jobs that
are trying to use more memory than allowed on the nodes assigned them,
are trying to use more memory than allowed on the nodes assigned to them,
and with tightly coupled, long running, massively parallel scientific
computing jobs that will dramatically fail to meet required performance
goals if they start to use more memory than allowed to them.
@@ -378,7 +382,7 @@ as cpusets and sched_setaffinity.
The algorithmic cost of load balancing and its impact on key shared
kernel data structures such as the task list increases more than
linearly with the number of CPUs being balanced. So the scheduler
has support to partition the systems CPUs into a number of sched
has support to partition the systems CPUs into a number of sched
domains such that it only load balances within each sched domain.
Each sched domain covers some subset of the CPUs in the system;
no two sched domains overlap; some CPUs might not be in any sched
@@ -485,17 +489,22 @@ of CPUs allowed to a cpuset having 'sched_load_balance' enabled.
The internal kernel cpuset to scheduler interface passes from the
cpuset code to the scheduler code a partition of the load balanced
CPUs in the system. This partition is a set of subsets (represented
as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all
the CPUs that must be load balanced.
as an array of struct cpumask) of CPUs, pairwise disjoint, that cover
all the CPUs that must be load balanced.

Whenever the 'sched_load_balance' flag changes, or CPUs come or go
from a cpuset with this flag enabled, or a cpuset with this flag
enabled is removed, the cpuset code builds a new such partition and
passes it to the scheduler sched domain setup code, to have the sched
domains rebuilt as necessary.
The cpuset code builds a new such partition and passes it to the
scheduler sched domain setup code, to have the sched domains rebuilt
as necessary, whenever:
- the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes,
- or CPUs come or go from a cpuset with this flag enabled,
- or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs
  and with this flag enabled changes,
- or a cpuset with non-empty CPUs and with this flag enabled is removed,
- or a cpu is offlined/onlined.

This partition exactly defines what sched domains the scheduler should
setup - one sched domain for each element (cpumask_t) in the partition.
setup - one sched domain for each element (struct cpumask) in the
partition.

The scheduler remembers the currently active sched domain partitions.
When the scheduler routine partition_sched_domains() is invoked from
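A rough sketch of the interface this passage describes, with build_partition() standing in as a hypothetical helper for the cpuset side (the 2.6.29-era prototype of partition_sched_domains() also accepts a per-domain attribute array, passed as NULL here):

/* Hedged sketch: hand the scheduler an array of pairwise-disjoint
 * CPU masks, one sched domain per element. build_partition() is a
 * hypothetical placeholder for the cpuset code. */
struct cpumask *doms;
int ndoms = build_partition(&doms);             /* hypothetical */

partition_sched_domains(ndoms, doms, NULL);     /* rebuild domains */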
@@ -559,7 +568,7 @@ domain, the largest value among those is used. Be careful, if one
requests 0 and others are -1 then 0 is used.

Note that modifying this file will have both good and bad effects,
and whether it is acceptable or not will be depend on your situation.
and whether it is acceptable or not depends on your situation.
Don't modify this file if you are not sure.

If your situation is:
@@ -600,19 +609,15 @@ to allocate a page of memory for that task.

If a cpuset has its 'cpus' modified, then each task in that cpuset
will have its allowed CPU placement changed immediately. Similarly,
if a tasks pid is written to a cpusets 'tasks' file, in either its
current cpuset or another cpuset, then its allowed CPU placement is
changed immediately. If such a task had been bound to some subset
of its cpuset using the sched_setaffinity() call, the task will be
allowed to run on any CPU allowed in its new cpuset, negating the
affect of the prior sched_setaffinity() call.
if a tasks pid is written to another cpusets 'tasks' file, then its
allowed CPU placement is changed immediately. If such a task had been
bound to some subset of its cpuset using the sched_setaffinity() call,
the task will be allowed to run on any CPU allowed in its new cpuset,
negating the effect of the prior sched_setaffinity() call.

In summary, the memory placement of a task whose cpuset is changed is
updated by the kernel, on the next allocation of a page for that task,
but the processor placement is not updated, until that tasks pid is
rewritten to the 'tasks' file of its cpuset. This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a tasks processor placement.
and the processor placement is updated immediately.

Normally, once a page is allocated (given a physical page
of main memory) then that page stays on whatever node it
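For reference, a minimal sketch of the sched_setaffinity() call discussed above — standard glibc usage, pinning the calling task to CPU 0, a mask that a later cpuset move would negate as the text explains:

#define _GNU_SOURCE
#include <sched.h>

/* Hedged sketch: bind the calling task to CPU 0 only. */
int pin_to_cpu0(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);
        return sched_setaffinity(0 /* self */, sizeof(set), &set);
}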
@@ -681,10 +686,14 @@ and then start a subshell 'sh' in that cpuset:
# The next line should display '/Charlie'
cat /proc/self/cpuset

In the future, a C library interface to cpusets will likely be
available. For now, the only way to query or modify cpusets is
via the cpuset file system, using the various cd, mkdir, echo, cat,
rmdir commands from the shell, or their equivalent from C.
There are ways to query or modify cpusets:
- via the cpuset file system directly, using the various cd, mkdir, echo,
  cat, rmdir commands from the shell, or their equivalent from C.
- via the C library libcpuset.
- via the C library libcgroup.
  (http://sourceforge.net/projects/libcg/)
- via the python application cset.
  (http://developer.novell.com/wiki/index.php/Cpuset)

The sched_setaffinity calls can also be done at the shell prompt using
SGI's runon or Robert Love's taskset. The mbind and set_mempolicy
@@ -756,7 +765,7 @@ mount -t cpuset X /dev/cpuset

is equivalent to

mount -t cgroup -ocpuset X /dev/cpuset
mount -t cgroup -ocpuset,noprefix X /dev/cpuset
echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent

2.2 Adding/removing cpus
Documentation/hwmon/hpfall.c | 101 (new file)
@@ -0,0 +1,101 @@
/* Disk protection for HP machines.
 *
 * Copyright 2008 Eric Piel
 * Copyright 2009 Pavel Machek <pavel@suse.cz>
 *
 * GPLv2.
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>
#include <signal.h>

void write_int(char *path, int i)
{
        char buf[1024];
        int fd = open(path, O_RDWR);
        if (fd < 0) {
                perror("open");
                exit(1);
        }
        sprintf(buf, "%d", i);
        if (write(fd, buf, strlen(buf)) != strlen(buf)) {
                perror("write");
                exit(1);
        }
        close(fd);
}

void set_led(int on)
{
        write_int("/sys/class/leds/hp::hddprotect/brightness", on);
}

void protect(int seconds)
{
        write_int("/sys/block/sda/device/unload_heads", seconds*1000);
}

int on_ac(void)
{
        // /sys/class/power_supply/AC0/online
}

int lid_open(void)
{
        // /proc/acpi/button/lid/LID/state
}

void ignore_me(void)
{
        protect(0);
        set_led(0);
}

int main(int argc, char* argv[])
{
        int fd, ret;

        fd = open("/dev/freefall", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        signal(SIGALRM, ignore_me);

        for (;;) {
                unsigned char count;

                ret = read(fd, &count, sizeof(count));
                alarm(0);
                if ((ret == -1) && (errno == EINTR)) {
                        /* Alarm expired, time to unpark the heads */
                        continue;
                }

                if (ret != sizeof(count)) {
                        perror("read");
                        break;
                }

                protect(21);
                set_led(1);
                if (1 || on_ac() || lid_open()) {
                        alarm(2);
                } else {
                        alarm(20);
                }
        }

        close(fd);
        return EXIT_SUCCESS;
}
@@ -33,6 +33,14 @@ rate - reports the sampling rate of the accelerometer device in HZ
This driver also provides an absolute input class device, allowing
the laptop to act as a pinball machine-esque joystick.

Another feature of the driver is misc device called "freefall" that
acts similar to /dev/rtc and reacts on free-fall interrupts received
from the device. It supports blocking operations, poll/select and
fasync operation modes. You must read 1 bytes from the device. The
result is number of free-fall interrupts since the last successful
read (or 255 if number of interrupts would not fit).


Axes orientation
----------------
@@ -78,12 +78,10 @@ to view your kernel log and look for "mmiotrace has lost events" warning. If
events were lost, the trace is incomplete. You should enlarge the buffers and
try again. Buffers are enlarged by first seeing how large the current buffers
are:
$ cat /debug/tracing/trace_entries
$ cat /debug/tracing/buffer_size_kb
gives you a number. Approximately double this number and write it back, for
instance:
$ echo 0 > /debug/tracing/tracing_enabled
$ echo 128000 > /debug/tracing/trace_entries
$ echo 1 > /debug/tracing/tracing_enabled
$ echo 128000 > /debug/tracing/buffer_size_kb
Then start again from the top.

If you are doing a trace for a driver project, e.g. Nouveau, you should also
MAINTAINERS | 29
@@ -692,6 +692,13 @@ M: kernel@wantstofly.org
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
S: Maintained

ARM/NUVOTON W90X900 ARM ARCHITECTURE
P: Wan ZongShun
M: mcuos.com@gmail.com
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.mcuos.com
S: Maintained

ARPD SUPPORT
P: Jonathan Layes
L: netdev@vger.kernel.org

@@ -1905,10 +1912,10 @@ W: http://gigaset307x.sourceforge.net/
S: Maintained

HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
P: Robert Love
M: rlove@rlove.org
M: linux-kernel@vger.kernel.org
W: http://www.kernel.org/pub/linux/kernel/people/rml/hdaps/
P: Frank Seidel
M: frank@f-seidel.de
L: lm-sensors@lm-sensors.org
W: http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/
S: Maintained

GSPCA FINEPIX SUBDRIVER

@@ -2001,7 +2008,7 @@ S: Maintained

HIBERNATION (aka Software Suspend, aka swsusp)
P: Pavel Machek
M: pavel@suse.cz
M: pavel@ucw.cz
P: Rafael J. Wysocki
M: rjw@sisk.pl
L: linux-pm@lists.linux-foundation.org

@@ -3327,8 +3334,8 @@ P: Jeremy Fitzhardinge
M: jeremy@xensource.com
P: Chris Wright
M: chrisw@sous-sol.org
P: Zachary Amsden
M: zach@vmware.com
P: Alok Kataria
M: akataria@vmware.com
P: Rusty Russell
M: rusty@rustcorp.com.au
L: virtualization@lists.osdl.org

@@ -4172,7 +4179,7 @@ SUSPEND TO RAM
P: Len Brown
M: len.brown@intel.com
P: Pavel Machek
M: pavel@suse.cz
M: pavel@ucw.cz
P: Rafael J. Wysocki
M: rjw@sisk.pl
L: linux-pm@lists.linux-foundation.org

@@ -4924,11 +4931,11 @@ L: zd1211-devs@lists.sourceforge.net (subscribers-only)
S: Maintained

ZR36067 VIDEO FOR LINUX DRIVER
P: Ronald Bultje
M: rbultje@ronald.bitfreak.net
L: mjpeg-users@lists.sourceforge.net
L: linux-media@vger.kernel.org
W: http://mjpeg.sourceforge.net/driver-zoran/
S: Maintained
T: Mercurial http://linuxtv.org/hg/v4l-dvb
S: Odd Fixes

ZS DECSTATION Z85C30 SERIAL DRIVER
P: Maciej W. Rozycki
Makefile | 2
@@ -389,6 +389,7 @@ PHONY += outputmakefile
# output directory.
outputmakefile:
ifneq ($(KBUILD_SRC),)
$(Q)ln -fsn $(srctree) source
$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \
$(srctree) $(objtree) $(VERSION) $(PATCHLEVEL)
endif

@@ -946,7 +947,6 @@ ifneq ($(KBUILD_SRC),)
mkdir -p include2; \
ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \
fi
ln -fsn $(srctree) source
endif

# prepare2 creates a makefile if using a separate output directory
README | 2
@@ -188,7 +188,7 @@ CONFIGURING the kernel:
values to random values.

You can find more information on using the Linux kernel config tools
in Documentation/kbuild/make-configs.txt.
in Documentation/kbuild/kconfig.txt.

NOTES on "make config":
- having unnecessary drivers will make the kernel bigger, and can
@@ -93,8 +93,8 @@ common_shutdown_1(void *generic_ptr)
if (cpuid != boot_cpuid) {
flags |= 0x00040000UL; /* "remain halted" */
*pflags = flags;
cpu_clear(cpuid, cpu_present_map);
cpu_clear(cpuid, cpu_possible_map);
set_cpu_present(cpuid, false);
set_cpu_possible(cpuid, false);
halt();
}
#endif

@@ -120,8 +120,8 @@ common_shutdown_1(void *generic_ptr)

#ifdef CONFIG_SMP
/* Wait for the secondaries to halt. */
cpu_clear(boot_cpuid, cpu_present_map);
cpu_clear(boot_cpuid, cpu_possible_map);
set_cpu_present(boot_cpuid, false);
set_cpu_possible(boot_cpuid, false);
while (cpus_weight(cpu_present_map))
barrier();
#endif
@@ -120,12 +120,12 @@ void __cpuinit
smp_callin(void)
{
int cpuid = hard_smp_processor_id();
cpumask_t mask = cpu_online_map;

if (cpu_test_and_set(cpuid, mask)) {
if (cpu_online(cpuid)) {
printk("??, cpu 0x%x already present??\n", cpuid);
BUG();
}
set_cpu_online(cpuid, true);

/* Turn on machine checks. */
wrmces(7);

@@ -436,8 +436,8 @@ setup_smp(void)
((char *)cpubase + i*hwrpb->processor_size);
if ((cpu->flags & 0x1cc) == 0x1cc) {
smp_num_probed++;
cpu_set(i, cpu_possible_map);
cpu_set(i, cpu_present_map);
set_cpu_possible(i, true);
set_cpu_present(i, true);
cpu->pal_revision = boot_cpu_palrev;
}

@@ -470,8 +470,8 @@ smp_prepare_cpus(unsigned int max_cpus)

/* Nothing to do on a UP box, or when told not to. */
if (smp_num_probed == 1 || max_cpus == 0) {
cpu_possible_map = cpumask_of_cpu(boot_cpuid);
cpu_present_map = cpumask_of_cpu(boot_cpuid);
init_cpu_possible(cpumask_of(boot_cpuid));
init_cpu_present(cpumask_of(boot_cpuid));
printk(KERN_INFO "SMP mode deactivated.\n");
return;
}
@@ -608,7 +608,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
CONFIG_AT91SAM9_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y

#
# USB-based Watchdog Cards

@@ -700,7 +700,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
CONFIG_AT91SAM9_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y

#
# USB-based Watchdog Cards

@@ -710,7 +710,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
CONFIG_AT91SAM9_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y

#
# USB-based Watchdog Cards

@@ -606,7 +606,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
CONFIG_AT91SAM9_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y

#
# Sonics Silicon Backplane

@@ -727,7 +727,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_AT91SAM9_WATCHDOG is not set
# CONFIG_AT91SAM9X_WATCHDOG is not set

#
# USB-based Watchdog Cards
@@ -74,9 +74,9 @@ EXPORT_SYMBOL(elf_set_personality);
 */
int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack)
{
if (executable_stack != EXSTACK_ENABLE_X)
if (executable_stack != EXSTACK_DISABLE_X)
return 1;
if (cpu_architecture() <= CPU_ARCH_ARMv6)
if (cpu_architecture() < CPU_ARCH_ARMv6)
return 1;
return 0;
}
@@ -697,7 +697,7 @@ static void __init at91_add_device_rtt(void)
 * Watchdog
 * -------------------------------------------------------------------- */

#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
static struct platform_device at91cap9_wdt_device = {
.name = "at91_wdt",
.id = -1,

@@ -643,7 +643,7 @@ static void __init at91_add_device_rtt(void)
 * Watchdog
 * -------------------------------------------------------------------- */

#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
static struct platform_device at91sam9260_wdt_device = {
.name = "at91_wdt",
.id = -1,

@@ -621,7 +621,7 @@ static void __init at91_add_device_rtt(void)
 * Watchdog
 * -------------------------------------------------------------------- */

#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
static struct platform_device at91sam9261_wdt_device = {
.name = "at91_wdt",
.id = -1,

@@ -854,7 +854,7 @@ static void __init at91_add_device_rtt(void)
 * Watchdog
 * -------------------------------------------------------------------- */

#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
static struct platform_device at91sam9263_wdt_device = {
.name = "at91_wdt",
.id = -1,

@@ -609,7 +609,7 @@ static void __init at91_add_device_rtt(void)
 * Watchdog
 * -------------------------------------------------------------------- */

#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
static struct platform_device at91sam9rl_wdt_device = {
.name = "at91_wdt",
.id = -1,
@@ -490,7 +490,8 @@ postcore_initcall(at91_gpio_debugfs_init);

/*--------------------------------------------------------------------------*/

/* This lock class tells lockdep that GPIO irqs are in a different
/*
 * This lock class tells lockdep that GPIO irqs are in a different
 * category than their parents, so it won't report false recursion.
 */
static struct lock_class_key gpio_lock_class;

@@ -509,9 +510,6 @@ void __init at91_gpio_irq_setup(void)
unsigned id = this->id;
unsigned i;

/* enable PIO controller's clock */
clk_enable(this->clock);

__raw_writel(~0, this->regbase + PIO_IDR);

for (i = 0, pin = this->chipbase; i < 32; i++, pin++) {

@@ -556,7 +554,14 @@ void __init at91_gpio_init(struct at91_gpio_bank *data, int nr_banks)
data->chipbase = PIN_BASE + i * 32;
data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS;

/* AT91SAM9263_ID_PIOCDE groups PIOC, PIOD, PIOE */
/* enable PIO controller's clock */
clk_enable(data->clock);

/*
 * Some processors share peripheral ID between multiple GPIO banks.
 * SAM9263 (PIOC, PIOD, PIOE)
 * CAP9 (PIOA, PIOB, PIOC, PIOD)
 */
if (last && last->id == data->id)
last->next = data;
}
@@ -93,6 +93,7 @@ struct atmel_nand_data {
u8 enable_pin; /* chip enable */
u8 det_pin; /* card detect */
u8 rdy_pin; /* ready/busy */
u8 rdy_pin_active_low; /* rdy_pin value is inverted */
u8 ale; /* address line number connected to ALE */
u8 cle; /* address line number connected to CLE */
u8 bus_width_16; /* buswidth is 16 bit */
@@ -1,3 +0,0 @@
/*
 * arch/arm/mach-ep93xx/include/mach/gesbc9312.h
 */

@@ -10,7 +10,6 @@

#include "platform.h"

#include "gesbc9312.h"
#include "ts72xx.h"

#endif
@@ -42,7 +42,7 @@ void __init kirkwood_init_irq(void)
writel(0, GPIO_EDGE_CAUSE(32));

for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) {
set_irq_chip(i, &orion_gpio_irq_level_chip);
set_irq_chip(i, &orion_gpio_irq_chip);
set_irq_handler(i, handle_level_irq);
irq_desc[i].status |= IRQ_LEVEL;
set_irq_flags(i, IRQF_VALID);

@@ -40,7 +40,7 @@ void __init mv78xx0_init_irq(void)
writel(0, GPIO_EDGE_CAUSE(0));

for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) {
set_irq_chip(i, &orion_gpio_irq_level_chip);
set_irq_chip(i, &orion_gpio_irq_chip);
set_irq_handler(i, handle_level_irq);
irq_desc[i].status |= IRQ_LEVEL;
set_irq_flags(i, IRQF_VALID);
@@ -565,7 +565,7 @@ u32 omap2_clksel_to_divisor(struct clk *clk, u32 field_val)
 *
 * Given a struct clk of a rate-selectable clksel clock, and a clock divisor,
 * find the corresponding register field value. The return register value is
 * the value before left-shifting. Returns 0xffffffff on error
 * the value before left-shifting. Returns ~0 on error
 */
u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
{

@@ -577,7 +577,7 @@ u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)

clks = omap2_get_clksel_by_parent(clk, clk->parent);
if (clks == NULL)
return 0;
return ~0;

for (clkr = clks->rates; clkr->div; clkr++) {
if ((clkr->flags & cpu_mask) && (clkr->div == div))

@@ -588,7 +588,7 @@ u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
printk(KERN_ERR "clock: Could not find divisor %d for "
"clock %s parent %s\n", div, clk->name,
clk->parent->name);
return 0;
return ~0;
}

return clkr->val;

@@ -708,7 +708,7 @@ static u32 omap2_clksel_get_src_field(void __iomem **src_addr,
return 0;

for (clkr = clks->rates; clkr->div; clkr++) {
if (clkr->flags & (cpu_mask | DEFAULT_RATE))
if (clkr->flags & cpu_mask && clkr->flags & DEFAULT_RATE)
break; /* Found the default rate for this platform */
}

@@ -746,7 +746,7 @@ int omap2_clk_set_parent(struct clk *clk, struct clk *new_parent)
return -EINVAL;

if (clk->usecount > 0)
_omap2_clk_disable(clk);
omap2_clk_disable(clk);

/* Set new source value (previous dividers if any in effect) */
reg_val = __raw_readl(src_addr) & ~field_mask;

@@ -759,11 +759,11 @@ int omap2_clk_set_parent(struct clk *clk, struct clk *new_parent)
wmb();
}

if (clk->usecount > 0)
_omap2_clk_enable(clk);

clk->parent = new_parent;

if (clk->usecount > 0)
omap2_clk_enable(clk);

/* CLKSEL clocks follow their parents' rates, divided by a divisor */
clk->rate = new_parent->rate;
@@ -44,7 +44,7 @@ void __init orion5x_init_irq(void)
 * User can use set_type() if he wants to use edge types handlers.
 */
for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) {
set_irq_chip(i, &orion_gpio_irq_level_chip);
set_irq_chip(i, &orion_gpio_irq_chip);
set_irq_handler(i, handle_level_irq);
irq_desc[i].status |= IRQ_LEVEL;
set_irq_flags(i, IRQF_VALID);
@@ -693,7 +693,8 @@ static void __init sanity_check_meminfo(void)
 * Check whether this memory bank would entirely overlap
 * the vmalloc area.
 */
if (__va(bank->start) >= VMALLOC_MIN) {
if (__va(bank->start) >= VMALLOC_MIN ||
    __va(bank->start) < PAGE_OFFSET) {
printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
"(vmalloc region overlap).\n",
bank->start, bank->start + bank->size - 1);
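The extra test added here catches __va() results that wrapped around. A worked example with typical 32-bit values (PAGE_OFFSET = 0xc0000000 is an assumption for illustration):

/* Hedged illustration: __va(x) is roughly x + PAGE_OFFSET here. For a
 * bank at physical 0xf0000000 with PAGE_OFFSET 0xc0000000:
 *   0xf0000000 + 0xc0000000 = 0x1b0000000 -> truncates to 0xb0000000,
 * which is below PAGE_OFFSET, so only the new second test rejects it. */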
@@ -265,51 +265,36 @@ EXPORT_SYMBOL(orion_gpio_set_blink);
 * polarity LEVEL mask
 *
 ****************************************************************************/
static void gpio_irq_edge_ack(u32 irq)
{
int pin = irq_to_gpio(irq);

writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
static void gpio_irq_ack(u32 irq)
{
int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
int pin = irq_to_gpio(irq);
writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
}
}

static void gpio_irq_edge_mask(u32 irq)
static void gpio_irq_mask(u32 irq)
{
int pin = irq_to_gpio(irq);
u32 u;

u = readl(GPIO_EDGE_MASK(pin));
int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
u32 u = readl(reg);
u &= ~(1 << (pin & 31));
writel(u, GPIO_EDGE_MASK(pin));
writel(u, reg);
}

static void gpio_irq_edge_unmask(u32 irq)
static void gpio_irq_unmask(u32 irq)
{
int pin = irq_to_gpio(irq);
u32 u;

u = readl(GPIO_EDGE_MASK(pin));
int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
u32 u = readl(reg);
u |= 1 << (pin & 31);
writel(u, GPIO_EDGE_MASK(pin));
}

static void gpio_irq_level_mask(u32 irq)
{
int pin = irq_to_gpio(irq);
u32 u;

u = readl(GPIO_LEVEL_MASK(pin));
u &= ~(1 << (pin & 31));
writel(u, GPIO_LEVEL_MASK(pin));
}

static void gpio_irq_level_unmask(u32 irq)
{
int pin = irq_to_gpio(irq);
u32 u;

u = readl(GPIO_LEVEL_MASK(pin));
u |= 1 << (pin & 31);
writel(u, GPIO_LEVEL_MASK(pin));
writel(u, reg);
}

static int gpio_irq_set_type(u32 irq, u32 type)

@@ -331,9 +316,9 @@ static int gpio_irq_set_type(u32 irq, u32 type)
 * Set edge/level type.
 */
if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
desc->chip = &orion_gpio_irq_edge_chip;
desc->handle_irq = handle_edge_irq;
} else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) {
desc->chip = &orion_gpio_irq_level_chip;
desc->handle_irq = handle_level_irq;
} else {
printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type);
return -EINVAL;

@@ -371,19 +356,11 @@ static int gpio_irq_set_type(u32 irq, u32 type)
return 0;
}

struct irq_chip orion_gpio_irq_edge_chip = {
.name = "orion_gpio_irq_edge",
.ack = gpio_irq_edge_ack,
.mask = gpio_irq_edge_mask,
.unmask = gpio_irq_edge_unmask,
.set_type = gpio_irq_set_type,
};

struct irq_chip orion_gpio_irq_level_chip = {
.name = "orion_gpio_irq_level",
.mask = gpio_irq_level_mask,
.mask_ack = gpio_irq_level_mask,
.unmask = gpio_irq_level_unmask,
struct irq_chip orion_gpio_irq_chip = {
.name = "orion_gpio",
.ack = gpio_irq_ack,
.mask = gpio_irq_mask,
.unmask = gpio_irq_unmask,
.set_type = gpio_irq_set_type,
};
@@ -31,8 +31,7 @@ void orion_gpio_set_blink(unsigned pin, int blink);
/*
 * GPIO interrupt handling.
 */
extern struct irq_chip orion_gpio_irq_edge_chip;
extern struct irq_chip orion_gpio_irq_level_chip;
extern struct irq_chip orion_gpio_irq_chip;
void orion_gpio_irq_handler(int irqoff);
@@ -116,6 +116,7 @@ struct atmel_nand_data {
int enable_pin; /* chip enable */
int det_pin; /* card detect */
int rdy_pin; /* ready/busy */
u8 rdy_pin_active_low; /* rdy_pin value is inverted */
u8 ale; /* address line number connected to ALE */
u8 cle; /* address line number connected to CLE */
u8 bus_width_16; /* buswidth is 16 bit */
@@ -221,7 +221,11 @@ config IA64_HP_SIM

config IA64_XEN_GUEST
bool "Xen guest"
select SWIOTLB
depends on XEN
help
Build a kernel that runs on Xen guest domain. At this moment only
16KB page size in supported.

endchoice

@@ -479,8 +483,7 @@ config HOLES_IN_ZONE
default y if VIRTUAL_MEM_MAP

config HAVE_ARCH_EARLY_PFN_TO_NID
def_bool y
depends on NEED_MULTIPLE_NODES
def_bool NUMA && SPARSEMEM

config HAVE_ARCH_NODEDATA_EXTENSION
def_bool y
arch/ia64/configs/xen_domu_defconfig | 1601 (new file)
File diff suppressed because it is too large
@@ -25,6 +25,10 @@

#include <linux/ioctl.h>

/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_IOAPIC
#define __KVM_HAVE_DEVICE_ASSIGNMENT

/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
@@ -31,10 +31,6 @@ static inline int pfn_to_nid(unsigned long pfn)
#endif
}

#ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
extern int early_pfn_to_nid(unsigned long pfn);
#endif

#ifdef CONFIG_IA64_DIG /* DIG systems are small */
# define MAX_PHYSNODE_ID 8
# define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
@@ -39,7 +39,7 @@
/* BTE status register only supports 16 bits for length field */
#define BTE_LEN_BITS (16)
#define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1)
#define BTE_MAX_XFER ((1 << BTE_LEN_BITS) * L1_CACHE_BYTES)
#define BTE_MAX_XFER (BTE_LEN_MASK << L1_CACHE_SHIFT)


/* Define hardware */
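For reference, the two definitions differ by exactly one cache line; with BTE_LEN_BITS = 16 and an assumed 128-byte L1 cache line (L1_CACHE_SHIFT = 7):

/* old: (1 << 16) * 128      = 8388608 bytes (2^16 cache lines)
 * new: ((1 << 16) - 1) << 7 = 8388480 bytes (2^16 - 1 cache lines),
 * matching a 16-bit length field, which tops out at 65535 lines. */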
@@ -736,14 +736,15 @@ int __cpu_disable(void)
return -EBUSY;
}

cpu_clear(cpu, cpu_online_map);

if (migrate_platform_irqs(cpu)) {
cpu_set(cpu, cpu_online_map);
return (-EBUSY);
return -EBUSY;
}

remove_siblinginfo(cpu);
fixup_irqs();
cpu_clear(cpu, cpu_online_map);
local_flush_tlb_all();
cpu_clear(cpu, cpu_callin_map);
return 0;
@@ -1337,6 +1337,10 @@ static void kvm_release_vm_pages(struct kvm *kvm)
}
}

void kvm_arch_sync_events(struct kvm *kvm)
{
}

void kvm_arch_destroy_vm(struct kvm *kvm)
{
kvm_iommu_unmap_guest(kvm);
@@ -455,13 +455,18 @@ fpswa_ret_t vmm_fp_emulate(int fp_fault, void *bundle, unsigned long *ipsr,
if (!vmm_fpswa_interface)
return (fpswa_ret_t) {-1, 0, 0, 0};

/*
 * Just let fpswa driver to use hardware fp registers.
 * No fp register is valid in memory.
 */
memset(&fp_state, 0, sizeof(fp_state_t));

/*
 * compute fp_state. only FP registers f6 - f11 are used by the
 * vmm, so set those bits in the mask and set the low volatile
 * pointer to point to these registers.
 */
fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */

fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;

/*
 * unsigned long (*EFI_FPSWA) (
 *      unsigned long trap_type,
 *      void *Bundle,

@@ -545,10 +550,6 @@ void reflect_interruption(u64 ifa, u64 isr, u64 iim,
status = vmm_handle_fpu_swa(0, regs, isr);
if (!status)
return ;
else if (-EAGAIN == status) {
vcpu_decrement_iip(vcpu);
return ;
}
break;
}
@@ -58,7 +58,7 @@ paddr_to_nid(unsigned long paddr)
 * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where
 * the section resides.
 */
int early_pfn_to_nid(unsigned long pfn)
int __meminit __early_pfn_to_nid(unsigned long pfn)
{
int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;

@@ -70,7 +70,7 @@ int early_pfn_to_nid(unsigned long pfn)
return node_memblk[i].nid;
}

return 0;
return -1;
}

#ifdef CONFIG_MEMORY_HOTPLUG
@@ -97,9 +97,10 @@ bte_result_t bte_copy(u64 src, u64 dest, u64 len, u64 mode, void *notification)
return BTE_SUCCESS;
}

BUG_ON((len & L1_CACHE_MASK) ||
       (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK));
BUG_ON(!(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT)));
BUG_ON(len & L1_CACHE_MASK);
BUG_ON(src & L1_CACHE_MASK);
BUG_ON(dest & L1_CACHE_MASK);
BUG_ON(len > BTE_MAX_XFER);

/*
 * Start with interface corresponding to cpu number
@@ -8,8 +8,7 @@ config XEN
depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL
select XEN_XENCOMM
select NO_IDLE_HZ

# those are required to save/restore.
# followings are required to save/restore.
select ARCH_SUSPEND_POSSIBLE
select SUSPEND
select PM_SLEEP
@@ -153,7 +153,7 @@ xen_post_smp_prepare_boot_cpu(void)
xen_setup_vcpu_info_placement();
}

static const struct pv_init_ops xen_init_ops __initdata = {
static const struct pv_init_ops xen_init_ops __initconst = {
.banner = xen_banner,

.reserve_memory = xen_reserve_memory,

@@ -337,7 +337,7 @@ xen_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &apic_op);
}

static const struct pv_iosapic_ops xen_iosapic_ops __initdata = {
static const struct pv_iosapic_ops xen_iosapic_ops __initconst = {
.pcat_compat_init = xen_pcat_compat_init,
.__get_irq_chip = xen_iosapic_get_irq_chip,
@@ -7,6 +7,7 @@ mainmenu "Linux Kernel Configuration"

config MN10300
def_bool y
select HAVE_OPROFILE

config AM33
def_bool y
@@ -173,7 +173,7 @@ static int pci_ampci_write_config_byte(struct pci_bus *bus, unsigned int devfn,
BRIDGEREGB(where) = value;
} else {
if (bus->number == 0 &&
    (devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0))
    (devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0))
    )
__pcidebug("<= %02x", bus, devfn, where, value);
CONFIG_ADDRESS = CONFIG_CMD(bus, devfn, where);
@@ -60,7 +60,7 @@
/* It should be preserving the high 48 bits and then specifically */
/* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */
#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
                        _PAGE_HPTEFLAGS)
                        _PAGE_HPTEFLAGS | _PAGE_SPECIAL)

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS 0

@@ -114,7 +114,7 @@ static inline struct subpage_prot_table *pgd_subpage_prot(pgd_t *pgd)
 * pgprot changes
 */
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
                        _PAGE_ACCESSED)
                        _PAGE_ACCESSED | _PAGE_SPECIAL)

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS 0x1ff

@@ -429,7 +429,8 @@ extern int icache_44x_need_flush;
#define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE()
#endif

#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
                        _PAGE_SPECIAL)


#define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
@@ -646,11 +646,16 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
                       unsigned int areg, struct pt_regs *regs,
                       unsigned int flags, unsigned int length)
{
char *ptr = (char *) &current->thread.TS_FPR(reg);
char *ptr;
int ret = 0;

flush_vsx_to_thread(current);

if (reg < 32)
ptr = (char *) &current->thread.TS_FPR(reg);
else
ptr = (char *) &current->thread.vr[reg - 32];

if (flags & ST)
ret = __copy_to_user(addr, ptr, length);
else {
@@ -125,6 +125,10 @@ static void kvmppc_free_vcpus(struct kvm *kvm)
}
}

void kvm_arch_sync_events(struct kvm *kvm)
{
}

void kvm_arch_destroy_vm(struct kvm *kvm)
{
kvmppc_free_vcpus(kvm);
@@ -19,6 +19,7 @@
#include <linux/notifier.h>
#include <linux/lmb.h>
#include <linux/of.h>
#include <linux/pfn.h>
#include <asm/sparsemem.h>
#include <asm/prom.h>
#include <asm/system.h>

@@ -882,7 +883,7 @@ static void mark_reserved_regions_for_nid(int nid)
unsigned long physbase = lmb.reserved.region[i].base;
unsigned long size = lmb.reserved.region[i].size;
unsigned long start_pfn = physbase >> PAGE_SHIFT;
unsigned long end_pfn = ((physbase + size) >> PAGE_SHIFT);
unsigned long end_pfn = PFN_UP(physbase + size);
struct node_active_region node_ar;
unsigned long node_end_pfn = node->node_start_pfn +
                             node->node_spanned_pages;

@@ -908,7 +909,7 @@ static void mark_reserved_regions_for_nid(int nid)
 */
if (end_pfn > node_ar.end_pfn)
reserve_size = (node_ar.end_pfn << PAGE_SHIFT)
                - (start_pfn << PAGE_SHIFT);
                - physbase;
/*
 * Only worry about *this* node, others may not
 * yet have valid NODE_DATA().
@@ -328,7 +328,7 @@ static int __init ps3_mm_add_memory(void)
return result;
}

core_initcall(ps3_mm_add_memory);
device_initcall(ps3_mm_add_memory);

/*============================================================================*/
/* dma routines */
@@ -145,7 +145,7 @@ cputime_to_timeval(const cputime_t cputime, struct timeval *value)
value->tv_usec = rp.subreg.even / 4096;
value->tv_sec = rp.subreg.odd;
#else
value->tv_usec = cputime % 4096000000ULL;
value->tv_usec = (cputime % 4096000000ULL) / 4096;
value->tv_sec = cputime / 4096000000ULL;
#endif
}
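The conversion fix is easier to see with units spelled out: s390 cputime counts 4096 units per microsecond, so one second is 4096000000 units and the remainder must still be divided down. A worked example with a made-up value:

/* Hedged illustration: cputime = 3 * 4096000000ULL + 2048000
 * old: tv_usec = 2048000          (still in 1/4096-us units - wrong)
 * new: tv_usec = 2048000 / 4096 = 500 microseconds (correct) */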
@@ -43,6 +43,8 @@ struct mem_chunk {

extern struct mem_chunk memory_chunk[];
extern unsigned long real_memory_size;
extern int memory_end_set;
extern unsigned long memory_end;

void detect_memory_layout(struct mem_chunk chunk[]);
@@ -82,7 +82,9 @@ char elf_platform[ELF_PLATFORM_SIZE];

struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS];
volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */
static unsigned long __initdata memory_end;

int __initdata memory_end_set;
unsigned long __initdata memory_end;

/*
 * This is set up by the setup-routine at boot-time

@@ -281,6 +283,7 @@ void (*pm_power_off)(void) = machine_power_off;
static int __init early_parse_mem(char *p)
{
memory_end = memparse(p, &p);
memory_end_set = 1;
return 0;
}
early_param("mem", early_parse_mem);

@@ -508,8 +511,10 @@ static void __init setup_memory_end(void)
int i;

#if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
if (ipl_info.type == IPL_TYPE_FCP_DUMP)
if (ipl_info.type == IPL_TYPE_FCP_DUMP) {
memory_end = ZFCPDUMP_HSA_SIZE;
memory_end_set = 1;
}
#endif
memory_size = 0;
memory_end &= PAGE_MASK;
@@ -212,6 +212,10 @@ static void kvm_free_vcpus(struct kvm *kvm)
}
}

void kvm_arch_sync_events(struct kvm *kvm)
{
}

void kvm_arch_destroy_vm(struct kvm *kvm)
{
kvm_free_vcpus(kvm);
@@ -78,7 +78,7 @@ void vde_init_libstuff(struct vde_data *vpri, struct vde_init *init)
{
struct vde_open_args *args;

vpri->args = kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL);
vpri->args = uml_kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL);
if (vpri->args == NULL) {
printk(UM_KERN_ERR "vde_init_libstuff - vde_open_args "
       "allocation failed");

@@ -91,8 +91,8 @@ void vde_init_libstuff(struct vde_data *vpri, struct vde_init *init)
args->group = init->group;
args->mode = init->mode ? init->mode : 0700;

args->port ? printk(UM_KERN_INFO "port %d", args->port) :
printk(UM_KERN_INFO "undefined port");
args->port ? printk("port %d", args->port) :
printk("undefined port");
}

int vde_user_read(void *conn, void *buf, int len)
@@ -174,28 +174,8 @@ config IOMMU_LEAK
Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings.

config MMIOTRACE
bool "Memory mapped IO tracing"
depends on DEBUG_KERNEL && PCI
select TRACING
help
Mmiotrace traces Memory Mapped I/O access and is meant for
debugging and reverse engineering. It is called from the ioremap
implementation and works via page faults. Tracing is disabled by
default and can be enabled at run-time.

See Documentation/tracers/mmiotrace.txt.
If you are not helping to develop drivers, say N.

config MMIOTRACE_TEST
tristate "Test module for mmiotrace"
depends on MMIOTRACE && m
help
This is a dumb module for testing mmiotrace. It is very dangerous
as it will write garbage to IO memory starting at a given address.
However, it should be safe to use on e.g. unused portion of VRAM.

Say N, unless you absolutely know what you are doing.

config HAVE_MMIOTRACE_SUPPORT
def_bool y

#
# IO delay types:
@@ -9,6 +9,13 @@
#include <linux/types.h>
#include <linux/ioctl.h>

/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_PIT
#define __KVM_HAVE_IOAPIC
#define __KVM_HAVE_DEVICE_ASSIGNMENT
#define __KVM_HAVE_MSI
#define __KVM_HAVE_USER_NMI

/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
@@ -32,8 +32,6 @@ static inline void get_memcfg_numa(void)
get_memcfg_numa_flat();
}

extern int early_pfn_to_nid(unsigned long pfn);

extern void resume_map_numa_kva(pgd_t *pgd);

#else /* !CONFIG_NUMA */

@@ -40,8 +40,6 @@ static inline __attribute__((pure)) int phys_to_nid(unsigned long addr)
#define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \
                           NODE_DATA(nid)->node_spanned_pages)

extern int early_pfn_to_nid(unsigned long pfn);

#ifdef CONFIG_NUMA_EMU
#define FAKE_NODE_MIN_SIZE (64 * 1024 * 1024)
#define FAKE_NODE_MIN_HASH_MASK (~(FAKE_NODE_MIN_SIZE - 1UL))
@@ -57,7 +57,6 @@ typedef struct { pgdval_t pgd; } pgd_t;
typedef struct { pgprotval_t pgprot; } pgprot_t;

extern int page_is_ram(unsigned long pagenr);
extern int pagerange_is_ram(unsigned long start, unsigned long end);
extern int devmem_is_allowed(unsigned long pagenr);
extern void map_devmem(unsigned long pfn, unsigned long size,
                       pgprot_t vma_prot);
@@ -1352,14 +1352,7 @@ static inline void arch_leave_lazy_cpu_mode(void)
PVOP_VCALL0(pv_cpu_ops.lazy_mode.leave);
}

static inline void arch_flush_lazy_cpu_mode(void)
{
if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU)) {
arch_leave_lazy_cpu_mode();
arch_enter_lazy_cpu_mode();
}
}

void arch_flush_lazy_cpu_mode(void);

#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
static inline void arch_enter_lazy_mmu_mode(void)

@@ -1372,13 +1365,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
PVOP_VCALL0(pv_mmu_ops.lazy_mode.leave);
}

static inline void arch_flush_lazy_mmu_mode(void)
{
if (unlikely(paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU)) {
arch_leave_lazy_mmu_mode();
arch_enter_lazy_mmu_mode();
}
}
void arch_flush_lazy_mmu_mode(void);

static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
                                unsigned long phys, pgprot_t flags)
@@ -13,7 +13,6 @@
 * Hooray, we are in Long 64-bit mode (but still running in low memory)
 */
ENTRY(wakeup_long64)
wakeup_long64:
movq saved_magic, %rax
movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax

@@ -34,16 +33,12 @@ wakeup_long64:

movq saved_rip, %rax
jmp *%rax
ENDPROC(wakeup_long64)

bogus_64_magic:
jmp bogus_64_magic

.align 2
.p2align 4,,15
.globl do_suspend_lowlevel
.type do_suspend_lowlevel,@function
do_suspend_lowlevel:
.LFB5:
ENTRY(do_suspend_lowlevel)
subq $8, %rsp
xorl %eax, %eax
call save_processor_state

@@ -67,7 +62,7 @@ do_suspend_lowlevel:
pushfq
popq pt_regs_flags(%rax)

movq $.L97, saved_rip(%rip)
movq $resume_point, saved_rip(%rip)

movq %rsp, saved_rsp
movq %rbp, saved_rbp

@@ -78,14 +73,12 @@ do_suspend_lowlevel:
addq $8, %rsp
movl $3, %edi
xorl %eax, %eax
jmp acpi_enter_sleep_state
.L97:
.p2align 4,,7
.L99:
.align 4
movl $24, %eax
movw %ax, %ds
call acpi_enter_sleep_state
/* in case something went wrong, restore the machine status and go on */
jmp resume_point

.align 4
resume_point:
/* We don't restore %rax, it must be 0 anyway */
movq $saved_context, %rax
movq saved_context_cr4(%rax), %rbx

@@ -117,12 +110,9 @@ do_suspend_lowlevel:
xorl %eax, %eax
addq $8, %rsp
jmp restore_processor_state
.LFE5:
.Lfe5:
.size do_suspend_lowlevel, .Lfe5-do_suspend_lowlevel

ENDPROC(do_suspend_lowlevel)

.data
ALIGN
ENTRY(saved_rbp) .quad 0
ENTRY(saved_rsi) .quad 0
ENTRY(saved_rdi) .quad 0
@@ -862,7 +862,7 @@ void clear_local_APIC(void)
}

/* lets not touch this if we didn't frob it */
#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL)
#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL)
if (maxlvt >= 5) {
v = apic_read(APIC_LVTTHMR);
apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
@@ -1157,8 +1157,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
data->cpu = pol->cpu;
data->currpstate = HW_PSTATE_INVALID;

rc = powernow_k8_cpu_init_acpi(data);
if (rc) {
if (powernow_k8_cpu_init_acpi(data)) {
/*
 * Use the PSB BIOS structure. This is only availabe on
 * an UP version, and is deprecated by AMD.

@@ -1176,17 +1175,20 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
       "ACPI maintainers and complain to your BIOS "
       "vendor.\n");
#endif
goto err_out;
kfree(data);
return -ENODEV;
}
if (pol->cpu != 0) {
printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for "
       "CPU other than CPU0. Complain to your BIOS "
       "vendor.\n");
goto err_out;
kfree(data);
return -ENODEV;
}
rc = find_psb_table(data);
if (rc) {
goto err_out;
kfree(data);
return -ENODEV;
}
/* Take a crude guess here.
 * That guess was in microseconds, so multiply with 1000 */
@@ -295,11 +295,11 @@ void do_machine_check(struct pt_regs * regs, long error_code)
 * If we know that the error was in user space, send a
 * SIGBUS. Otherwise, panic if tolerance is low.
 *
 * do_exit() takes an awful lot of locks and has a slight
 * force_sig() takes an awful lot of locks and has a slight
 * risk of deadlocking.
 */
if (user_space) {
do_exit(SIGBUS);
force_sig(SIGBUS, current);
} else if (panic_on_oops || tolerant < 2) {
mce_panic("Uncorrected machine check",
          &panicm, mcestart);

@@ -490,7 +490,7 @@ static void __cpuinit mce_cpu_quirks(struct cpuinfo_x86 *c)

}

static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c)
static void mce_cpu_features(struct cpuinfo_x86 *c)
{
switch (c->x86_vendor) {
case X86_VENDOR_INTEL:

@@ -734,6 +734,7 @@ __setup("mce=", mcheck_enable);
static int mce_resume(struct sys_device *dev)
{
mce_init(NULL);
mce_cpu_features(&current_cpu_data);
return 0;
}
@@ -121,7 +121,7 @@ static long threshold_restart_bank(void *_tr)
}

/* cpu init entry point, called from mce.c with preempt off */
void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c)
void mce_amd_feature_init(struct cpuinfo_x86 *c)
{
unsigned int bank, block;
unsigned int cpu = smp_processor_id();
@@ -30,7 +30,7 @@ asmlinkage void smp_thermal_interrupt(void)
irq_exit();
}

static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c)
static void intel_init_thermal(struct cpuinfo_x86 *c)
{
u32 l, h;
int tm2 = 0;

@@ -84,7 +84,7 @@ static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c)
return;
}

void __cpuinit mce_intel_feature_init(struct cpuinfo_x86 *c)
void mce_intel_feature_init(struct cpuinfo_x86 *c)
{
intel_init_thermal(c);
}
@@ -269,6 +269,8 @@ static void hpet_set_mode(enum clock_event_mode mode,
now = hpet_readl(HPET_COUNTER);
cmp = now + (unsigned long) delta;
cfg = hpet_readl(HPET_Tn_CFG(timer));
/* Make sure we use edge triggered interrupts */
cfg &= ~HPET_TN_LEVEL;
cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
       HPET_TN_SETVAL | HPET_TN_32BIT;
hpet_writel(cfg, HPET_Tn_CFG(timer));
@@ -203,7 +203,7 @@ static void __init platform_detect(void)
static void __init platform_detect(void)
{
/* stopgap until OFW support is added to the kernel */
olpc_platform_info.boardrev = 0xc2;
olpc_platform_info.boardrev = olpc_board(0xc2);
}
#endif
@@ -268,6 +268,32 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
 	return __get_cpu_var(paravirt_lazy_mode);
 }
 
+void arch_flush_lazy_mmu_mode(void)
+{
+	preempt_disable();
+
+	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
+		WARN_ON(preempt_count() == 1);
+		arch_leave_lazy_mmu_mode();
+		arch_enter_lazy_mmu_mode();
+	}
+
+	preempt_enable();
+}
+
+void arch_flush_lazy_cpu_mode(void)
+{
+	preempt_disable();
+
+	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) {
+		WARN_ON(preempt_count() == 1);
+		arch_leave_lazy_cpu_mode();
+		arch_enter_lazy_cpu_mode();
+	}
+
+	preempt_enable();
+}
+
 struct pv_info pv_info = {
 	.name = "bare hardware",
 	.paravirt_enabled = 0,

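A note on the pattern shared by the two helpers added above: if the CPU is currently batching operations in the given lazy mode, leaving and immediately re-entering that mode pushes all queued operations out to the hypervisor while keeping batching enabled for the caller. A minimal userspace sketch of that leave/re-enter flush; batch_begin() and batch_commit() are hypothetical stand-ins for the paravirt enter/leave hooks:

#include <stdio.h>

enum lazy_mode { LAZY_NONE, LAZY_MMU };

static enum lazy_mode mode = LAZY_NONE;
static int queued;	/* updates batched but not yet issued */

/* hypothetical stand-ins for the enter/leave hooks */
static void batch_begin(void)
{
	mode = LAZY_MMU;
}

static void batch_commit(void)
{
	printf("flushed %d queued updates\n", queued);
	queued = 0;
	mode = LAZY_NONE;
}

/* flush = leave + re-enter, so the caller stays in lazy mode */
static void flush_lazy_mmu(void)
{
	if (mode == LAZY_MMU) {
		batch_commit();
		batch_begin();
	}
}

int main(void)
{
	batch_begin();
	queued = 3;		/* pretend three PTE updates were queued */
	flush_lazy_mmu();	/* "flushed 3 queued updates", still lazy */
	batch_commit();		/* "flushed 0 queued updates" */
	return 0;
}
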
@@ -104,9 +104,6 @@ void cpu_idle(void)
 			check_pgt_cache();
 			rmb();
 
-			if (rcu_pending(cpu))
-				rcu_check_callbacks(cpu, 0);
-
 			if (cpu_is_offline(cpu))
 				play_dead();
 

@@ -810,12 +810,16 @@ static void ptrace_bts_untrace(struct task_struct *child)
 
 static void ptrace_bts_detach(struct task_struct *child)
 {
-	if (unlikely(child->bts)) {
-		ds_release_bts(child->bts);
-		child->bts = NULL;
-
-		ptrace_bts_free_buffer(child);
-	}
+	/*
+	 * Ptrace_detach() races with ptrace_untrace() in case
+	 * the child dies and is reaped by another thread.
+	 *
+	 * We only do the memory accounting at this point and
+	 * leave the buffer deallocation and the bts tracer
+	 * release to ptrace_bts_untrace() which will be called
+	 * later on with tasklist_lock held.
+	 */
+	release_locked_buffer(child->bts_buffer, child->bts_size);
 }
 #else
 static inline void ptrace_bts_fork(struct task_struct *tsk) {}

@@ -99,6 +99,12 @@ static inline void preempt_conditional_sti(struct pt_regs *regs)
 		local_irq_enable();
 }
 
+static inline void conditional_cli(struct pt_regs *regs)
+{
+	if (regs->flags & X86_EFLAGS_IF)
+		local_irq_disable();
+}
+
 static inline void preempt_conditional_cli(struct pt_regs *regs)
 {
 	if (regs->flags & X86_EFLAGS_IF)

@@ -626,8 +632,10 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code)
 
 #ifdef CONFIG_X86_32
 debug_vm86:
+	/* reenable preemption: handle_vm86_trap() might sleep */
+	dec_preempt_count();
 	handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1);
-	preempt_conditional_cli(regs);
+	conditional_cli(regs);
 	return;
 #endif
 

@@ -283,10 +283,13 @@ void __devinit vmi_time_ap_init(void)
 #endif
 
 /** vmi clocksource */
+static struct clocksource clocksource_vmi;
 
 static cycle_t read_real_cycles(void)
 {
-	return vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL);
+	cycle_t ret = (cycle_t)vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL);
+	return ret >= clocksource_vmi.cycle_last ?
+		ret : clocksource_vmi.cycle_last;
 }
 
 static struct clocksource clocksource_vmi = {

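The rework above makes the VMI clocksource monotonic: each raw reading is clamped so it can never be less than cycle_last, the last value accepted by the timekeeping core, which absorbs counters that briefly step backwards (for example across CPU migration). A compilable sketch of the clamp, simplified in that the reader maintains its own last value; in the kernel the clocksource core maintains cycle_last, and the raw read is the hypervisor call:

#include <stdint.h>
#include <stdio.h>

/* hypothetical raw counter that steps backwards once; stands in for
 * vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL) */
static uint64_t samples[] = { 100, 200, 150, 300 };
static unsigned int idx;

static uint64_t read_raw_counter(void)
{
	return samples[idx++];
}

static uint64_t cycle_last;

static uint64_t read_cycles_monotonic(void)
{
	uint64_t ret = read_raw_counter();

	/* never report a value below the last one handed out */
	if (ret < cycle_last)
		return cycle_last;
	cycle_last = ret;
	return ret;
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("%llu\n", (unsigned long long)read_cycles_monotonic());
	/* prints 100 200 200 300: the backwards step is absorbed */
	return 0;
}
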
@@ -207,7 +207,7 @@ static int __pit_timer_fn(struct kvm_kpit_state *ps)
 		hrtimer_add_expires_ns(&pt->timer, pt->period);
 		pt->scheduled = hrtimer_get_expires_ns(&pt->timer);
 		if (pt->period)
-			ps->channels[0].count_load_time = hrtimer_get_expires(&pt->timer);
+			ps->channels[0].count_load_time = ktime_get();
 
 	return (pt->period == 0 ? 0 : 1);
 }

@@ -87,13 +87,6 @@ void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs);
 
-void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec)
-{
-	kvm_apic_timer_intr_post(vcpu, vec);
-	/* TODO: PIT, RTC etc. */
-}
-EXPORT_SYMBOL_GPL(kvm_timer_intr_post);
-
 void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
 	__kvm_migrate_apic_timer(vcpu);

@@ -89,7 +89,6 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 
 void kvm_pic_reset(struct kvm_kpic_state *s);
 
-void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec);
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);

@@ -35,6 +35,12 @@
 #include "kvm_cache_regs.h"
 #include "irq.h"
 
+#ifndef CONFIG_X86_64
+#define mod_64(x, y) ((x) - (y) * div64_u64(x, y))
+#else
+#define mod_64(x, y) ((x) % (y))
+#endif
+
 #define PRId64 "d"
 #define PRIx64 "llx"
 #define PRIu64 "u"

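The mod_64() macro added above exists because a plain `%` on 64-bit operands compiles to a libgcc helper call (__umoddi3) on 32-bit x86, which the kernel cannot link against; the macro instead derives the remainder from div64_u64() via the identity x mod y = x - y * (x / y). A quick userspace check of the identity, with native division standing in for div64_u64():

#include <stdint.h>
#include <stdio.h>

/* same identity as the kernel macro; plain division stands in
 * for div64_u64() */
static uint64_t mod_64(uint64_t x, uint64_t y)
{
	return x - y * (x / y);
}

int main(void)
{
	uint64_t x = 123456789ULL, y = 10000000ULL;

	printf("%llu %llu\n",
	       (unsigned long long)mod_64(x, y),
	       (unsigned long long)(x % y));
	/* both print 3456789 */
	return 0;
}
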
@@ -511,52 +517,22 @@ static void apic_send_ipi(struct kvm_lapic *apic)
 
 static u32 apic_get_tmcct(struct kvm_lapic *apic)
 {
-	u64 counter_passed;
-	ktime_t passed, now;
+	ktime_t remaining;
+	s64 ns;
 	u32 tmcct;
 
 	ASSERT(apic != NULL);
 
-	now = apic->timer.dev.base->get_time();
-	tmcct = apic_get_reg(apic, APIC_TMICT);
-
 	/* if initial count is 0, current count should also be 0 */
-	if (tmcct == 0)
+	if (apic_get_reg(apic, APIC_TMICT) == 0)
 		return 0;
 
-	if (unlikely(ktime_to_ns(now) <=
-		ktime_to_ns(apic->timer.last_update))) {
-		/* Wrap around */
-		passed = ktime_add(( {
-			    (ktime_t) {
-			    .tv64 = KTIME_MAX -
-			    (apic->timer.last_update).tv64}; }
-			    ), now);
-		apic_debug("time elapsed\n");
-	} else
-		passed = ktime_sub(now, apic->timer.last_update);
-
-	counter_passed = div64_u64(ktime_to_ns(passed),
-			(APIC_BUS_CYCLE_NS * apic->timer.divide_count));
-
-	if (counter_passed > tmcct) {
-		if (unlikely(!apic_lvtt_period(apic))) {
-			/* one-shot timers stick at 0 until reset */
-			tmcct = 0;
-		} else {
-			/*
-			 * periodic timers reset to APIC_TMICT when they
-			 * hit 0. The while loop simulates this happening N
-			 * times. (counter_passed %= tmcct) would also work,
-			 * but might be slower or not work on 32-bit??
-			 */
-			while (counter_passed > tmcct)
-				counter_passed -= tmcct;
-			tmcct -= counter_passed;
-		}
-	} else {
-		tmcct -= counter_passed;
-	}
+	remaining = hrtimer_expires_remaining(&apic->timer.dev);
+	if (ktime_to_ns(remaining) < 0)
+		remaining = ktime_set(0, 0);
+
+	ns = mod_64(ktime_to_ns(remaining), apic->timer.period);
+	tmcct = div64_u64(ns, (APIC_BUS_CYCLE_NS * apic->timer.divide_count));
 
 	return tmcct;
 }

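The rewritten apic_get_tmcct() derives the current-count register purely from hrtimer state: take the time remaining until expiry (clamped at zero), reduce it modulo the programmed period (the expiry of a periodic timer may already have been pushed several periods ahead), then divide by the bus cycle time scaled by the divide ratio. The arithmetic, sketched standalone with made-up values:

#include <stdint.h>
#include <stdio.h>

#define APIC_BUS_CYCLE_NS 1	/* one bus cycle per nanosecond, as in KVM */

static uint32_t tmcct_from_remaining(int64_t remaining_ns,
				     uint64_t period_ns,
				     uint32_t divide_count)
{
	uint64_t ns;

	if (remaining_ns < 0)		/* timer already expired */
		remaining_ns = 0;

	/* a periodic timer's expiry may sit several periods ahead,
	 * so reduce to within one period first */
	ns = (uint64_t)remaining_ns % period_ns;
	return (uint32_t)(ns / (APIC_BUS_CYCLE_NS * divide_count));
}

int main(void)
{
	/* TMICT = 1000 with divide-by-16 gives a 16000 ns period;
	 * 7500 ns left to expiry reads back as a count of 468 */
	printf("%u\n", tmcct_from_remaining(7500, 16000, 16));
	return 0;
}
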
@@ -653,8 +629,6 @@ static void start_apic_timer(struct kvm_lapic *apic)
 {
 	ktime_t now = apic->timer.dev.base->get_time();
 
-	apic->timer.last_update = now;
-
 	apic->timer.period = apic_get_reg(apic, APIC_TMICT) *
 		    APIC_BUS_CYCLE_NS * apic->timer.divide_count;
 	atomic_set(&apic->timer.pending, 0);

@@ -1110,16 +1084,6 @@ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu)
 	}
 }
 
-void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec)
-{
-	struct kvm_lapic *apic = vcpu->arch.apic;
-
-	if (apic && apic_lvt_vector(apic, APIC_LVTT) == vec)
-		apic->timer.last_update = ktime_add_ns(
-				apic->timer.last_update,
-				apic->timer.period);
-}
-
 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
 {
 	int vector = kvm_apic_has_interrupt(vcpu);

@@ -12,7 +12,6 @@ struct kvm_lapic {
 		atomic_t pending;
 		s64 period;	/* unit: ns */
 		u32 divide_count;
-		ktime_t last_update;
 		struct hrtimer dev;
 	} timer;
 	struct kvm_vcpu *vcpu;

@@ -42,7 +41,6 @@ void kvm_set_apic_base(struct kvm_vcpu *vcpu, u64 data);
 void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu);
 int kvm_lapic_enabled(struct kvm_vcpu *vcpu);
 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
-void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec);
 
 void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);

@@ -1698,8 +1698,13 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 	if (largepage)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (mt_mask) {
-		mt_mask = get_memory_type(vcpu, gfn) <<
-			  kvm_x86_ops->get_mt_mask_shift();
+		if (!kvm_is_mmio_pfn(pfn)) {
+			mt_mask = get_memory_type(vcpu, gfn) <<
+				kvm_x86_ops->get_mt_mask_shift();
+			mt_mask |= VMX_EPT_IGMT_BIT;
+		} else
+			mt_mask = MTRR_TYPE_UNCACHABLE <<
+				kvm_x86_ops->get_mt_mask_shift();
 		spte |= mt_mask;
 	}
 

@@ -1600,7 +1600,6 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu)
 	/* Okay, we can deliver the interrupt: grab it and update PIC state. */
 	intr_vector = kvm_cpu_get_interrupt(vcpu);
 	svm_inject_irq(svm, intr_vector);
-	kvm_timer_intr_post(vcpu, intr_vector);
 out:
 	update_cr8_intercept(vcpu);
 }

@@ -903,6 +903,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
 		data = vmcs_readl(GUEST_SYSENTER_ESP);
 		break;
 	default:
+		vmx_load_host_state(to_vmx(vcpu));
 		msr = find_msr_entry(to_vmx(vcpu), msr_index);
 		if (msr) {
 			data = msr->data;

@@ -3285,7 +3286,6 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu)
 	}
 	if (vcpu->arch.interrupt.pending) {
 		vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
-		kvm_timer_intr_post(vcpu, vcpu->arch.interrupt.nr);
 		if (kvm_cpu_has_interrupt(vcpu))
 			enable_irq_window(vcpu);
 	}

@@ -3687,8 +3687,7 @@ static int __init vmx_init(void)
 	if (vm_need_ept()) {
 		bypass_guest_pf = 0;
 		kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK |
-			VMX_EPT_WRITABLE_MASK |
-			VMX_EPT_IGMT_BIT);
+			VMX_EPT_WRITABLE_MASK);
 		kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,
 				VMX_EPT_EXECUTABLE_MASK,
 				VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);

@@ -967,7 +967,6 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_MMU_SHADOW_CACHE_CONTROL:
 	case KVM_CAP_SET_TSS_ADDR:
 	case KVM_CAP_EXT_CPUID:
-	case KVM_CAP_CLOCKSOURCE:
 	case KVM_CAP_PIT:
 	case KVM_CAP_NOP_IO_DELAY:
 	case KVM_CAP_MP_STATE:

@@ -992,6 +991,9 @@
 	case KVM_CAP_IOMMU:
 		r = iommu_found();
 		break;
+	case KVM_CAP_CLOCKSOURCE:
+		r = boot_cpu_has(X86_FEATURE_CONSTANT_TSC);
+		break;
 	default:
 		r = 0;
 		break;

@@ -4127,9 +4129,13 @@ static void kvm_free_vcpus(struct kvm *kvm)
 
 }
 
-void kvm_arch_destroy_vm(struct kvm *kvm)
+void kvm_arch_sync_events(struct kvm *kvm)
 {
 	kvm_free_all_assigned_devices(kvm);
+}
+
+void kvm_arch_destroy_vm(struct kvm *kvm)
+{
 	kvm_iommu_unmap_guest(kvm);
 	kvm_free_pit(kvm);
 	kfree(kvm->arch.vpic);

@@ -134,25 +134,6 @@ int page_is_ram(unsigned long pagenr)
 	return 0;
 }
 
-int pagerange_is_ram(unsigned long start, unsigned long end)
-{
-	int ram_page = 0, not_rampage = 0;
-	unsigned long page_nr;
-
-	for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT);
-	     ++page_nr) {
-		if (page_is_ram(page_nr))
-			ram_page = 1;
-		else
-			not_rampage = 1;
-
-		if (ram_page == not_rampage)
-			return -1;
-	}
-
-	return ram_page;
-}
-
 /*
  * Fix up the linear direct mapping of the kernel to avoid cache attribute
  * conflicts.

@@ -145,7 +145,7 @@ int __init compute_hash_shift(struct bootnode *nodes, int numnodes,
 	return shift;
 }
 
-int early_pfn_to_nid(unsigned long pfn)
+int __meminit __early_pfn_to_nid(unsigned long pfn)
 {
 	return phys_to_nid(pfn << PAGE_SHIFT);
 }

@@ -508,18 +508,13 @@ static int split_large_page(pte_t *kpte, unsigned long address)
 #endif
 
 	/*
-	 * Install the new, split up pagetable. Important details here:
+	 * Install the new, split up pagetable.
 	 *
-	 * On Intel the NX bit of all levels must be cleared to make a
-	 * page executable. See section 4.13.2 of Intel 64 and IA-32
-	 * Architectures Software Developer's Manual).
-	 *
-	 * Mark the entry present. The current mapping might be
-	 * set to not present, which we preserved above.
+	 * We use the standard kernel pagetable protections for the new
+	 * pagetable protections, the actual ptes set above control the
+	 * primary protection behavior:
 	 */
-	ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte)));
-	pgprot_val(ref_prot) |= _PAGE_PRESENT;
-	__set_pmd_pte(kpte, address, mk_pte(base, ref_prot));
+	__set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));
 	base = NULL;
 
 out_unlock:

@@ -575,7 +570,6 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 		address = cpa->vaddr[cpa->curpage];
 	else
 		address = *cpa->vaddr;
-
 repeat:
 	kpte = lookup_address(address, &level);
 	if (!kpte)

@@ -812,6 +806,13 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 
 	vm_unmap_aliases();
 
+	/*
+	 * If we're called with lazy mmu updates enabled, the
+	 * in-memory pte state may be stale. Flush pending updates to
+	 * bring them up to date.
+	 */
+	arch_flush_lazy_mmu_mode();
+
 	cpa.vaddr = addr;
 	cpa.numpages = numpages;
 	cpa.mask_set = mask_set;

@@ -854,6 +855,13 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 	} else
 		cpa_flush_all(cache);
 
+	/*
+	 * If we've been called with lazy mmu updates enabled, then
+	 * make sure that everything gets flushed out before we
+	 * return.
+	 */
+	arch_flush_lazy_mmu_mode();
+
 out:
 	return ret;
 }

@@ -211,6 +211,33 @@ chk_conflict(struct memtype *new, struct memtype *entry, unsigned long *type)
 static struct memtype *cached_entry;
 static u64 cached_start;
 
+static int pat_pagerange_is_ram(unsigned long start, unsigned long end)
+{
+	int ram_page = 0, not_rampage = 0;
+	unsigned long page_nr;
+
+	for (page_nr = (start >> PAGE_SHIFT); page_nr < (end >> PAGE_SHIFT);
+	     ++page_nr) {
+		/*
+		 * For legacy reasons, physical address range in the legacy ISA
+		 * region is tracked as non-RAM. This will allow users of
+		 * /dev/mem to map portions of legacy ISA region, even when
+		 * some of those portions are listed(or not even listed) with
+		 * different e820 types(RAM/reserved/..)
+		 */
+		if (page_nr >= (ISA_END_ADDRESS >> PAGE_SHIFT) &&
+		    page_is_ram(page_nr))
+			ram_page = 1;
+		else
+			not_rampage = 1;
+
+		if (ram_page == not_rampage)
+			return -1;
+	}
+
+	return ram_page;
+}
+
 /*
  * For RAM pages, mark the pages as non WB memory type using
  * PageNonWB (PG_arch_1). We allow only one set_memory_uc() or

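pat_pagerange_is_ram() packs a three-way answer into one int: 1 when every page in the range is RAM, 0 when none is, and -1 the moment the range turns out to be mixed; the `ram_page == not_rampage` comparison can only become true once both flags have been set. The same shape in a compilable userspace sketch, with a hypothetical lookup table in place of page_is_ram():

#include <stdio.h>

/* hypothetical per-page RAM map; stands in for page_is_ram() */
static const int is_ram_page[] = { 1, 1, 1, 0, 0 };

static int range_is_ram(unsigned long start, unsigned long end)
{
	int ram_page = 0, not_rampage = 0;
	unsigned long page_nr;

	for (page_nr = start; page_nr < end; ++page_nr) {
		if (is_ram_page[page_nr])
			ram_page = 1;
		else
			not_rampage = 1;

		/* both flags set: the range is mixed */
		if (ram_page == not_rampage)
			return -1;
	}
	return ram_page;
}

int main(void)
{
	printf("%d %d %d\n",
	       range_is_ram(0, 3),	/* all RAM      ->  1 */
	       range_is_ram(3, 5),	/* all non-RAM  ->  0 */
	       range_is_ram(0, 5));	/* mixed        -> -1 */
	return 0;
}
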
@@ -336,20 +363,12 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	if (new_type)
 		*new_type = actual_type;
 
-	/*
-	 * For legacy reasons, some parts of the physical address range in the
-	 * legacy 1MB region is treated as non-RAM (even when listed as RAM in
-	 * the e820 tables). So we will track the memory attributes of this
-	 * legacy 1MB region using the linear memtype_list always.
-	 */
-	if (end >= ISA_END_ADDRESS) {
-		is_range_ram = pagerange_is_ram(start, end);
-		if (is_range_ram == 1)
-			return reserve_ram_pages_type(start, end, req_type,
-						      new_type);
-		else if (is_range_ram < 0)
-			return -EINVAL;
-	}
+	is_range_ram = pat_pagerange_is_ram(start, end);
+	if (is_range_ram == 1)
+		return reserve_ram_pages_type(start, end, req_type,
+					      new_type);
+	else if (is_range_ram < 0)
+		return -EINVAL;
 
 	new = kmalloc(sizeof(struct memtype), GFP_KERNEL);
 	if (!new)

@@ -446,19 +465,11 @@ int free_memtype(u64 start, u64 end)
 	if (is_ISA_range(start, end - 1))
 		return 0;
 
-	/*
-	 * For legacy reasons, some parts of the physical address range in the
-	 * legacy 1MB region is treated as non-RAM (even when listed as RAM in
-	 * the e820 tables). So we will track the memory attributes of this
-	 * legacy 1MB region using the linear memtype_list always.
-	 */
-	if (end >= ISA_END_ADDRESS) {
-		is_range_ram = pagerange_is_ram(start, end);
-		if (is_range_ram == 1)
-			return free_ram_pages_type(start, end);
-		else if (is_range_ram < 0)
-			return -EINVAL;
-	}
+	is_range_ram = pat_pagerange_is_ram(start, end);
+	if (is_range_ram == 1)
+		return free_ram_pages_type(start, end);
+	else if (is_range_ram < 0)
+		return -EINVAL;
 
 	spin_lock(&memtype_lock);
 	list_for_each_entry(entry, &memtype_list, nd) {

@@ -626,17 +637,13 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 	unsigned long flags;
 	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
 
-	is_ram = pagerange_is_ram(paddr, paddr + size);
+	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
-	if (is_ram != 0) {
-		/*
-		 * For mapping RAM pages, drivers need to call
-		 * set_memory_[uc|wc|wb] directly, for reserve and free, before
-		 * setting up the PTE.
-		 */
-		WARN_ON_ONCE(1);
-		return 0;
-	}
+	/*
+	 * reserve_pfn_range() doesn't support RAM pages.
+	 */
+	if (is_ram != 0)
+		return -EINVAL;
 
 	ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);
 	if (ret)

@@ -693,7 +700,7 @@ static void free_pfn_range(u64 paddr, unsigned long size)
 {
 	int is_ram;
 
-	is_ram = pagerange_is_ram(paddr, paddr + size);
+	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 	if (is_ram == 0)
 		free_memtype(paddr, paddr + size);
 }

@@ -209,12 +209,19 @@ void blk_abort_queue(struct request_queue *q)
 {
 	unsigned long flags;
 	struct request *rq, *tmp;
+	LIST_HEAD(list);
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
 	elv_abort_queue(q);
 
-	list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list)
+	/*
+	 * Splice entries to local list, to avoid deadlocking if entries
+	 * get readded to the timeout list by error handling
+	 */
+	list_splice_init(&q->timeout_list, &list);
+
+	list_for_each_entry_safe(rq, tmp, &list, timeout_list)
 		blk_abort_request(rq);
 
 	spin_unlock_irqrestore(q->queue_lock, flags);

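The blk_abort_queue() change is an instance of a standard pattern: splice the whole list onto a private head while holding the lock, then walk the private copy, so entries that error handling re-adds to the original timeout list cannot be revisited or make the walk loop forever. A minimal sketch with a hand-rolled singly linked list; the kernel does the same with list_splice_init() on a struct list_head:

#include <stdio.h>
#include <stddef.h>

struct node { struct node *next; int id; };

static struct node *timeout_list;	/* shared list, normally lock-protected */

/* handling a node may re-add entries to timeout_list; walking the
 * shared list directly could then never terminate */
static void handle(struct node *n)
{
	printf("aborting request %d\n", n->id);
}

static void abort_all(void)
{
	/* splice: steal the whole list under the (imagined) lock */
	struct node *local = timeout_list;
	timeout_list = NULL;

	/* then walk the private copy at leisure */
	while (local) {
		struct node *next = local->next;
		handle(local);
		local = next;
	}
}

int main(void)
{
	struct node b = { NULL, 2 }, a = { &b, 1 };

	timeout_list = &a;
	abort_all();
	return 0;
}
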
@@ -142,7 +142,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 
 	what |= ddir_act[rw & WRITE];
 	what |= MASK_TC_BIT(rw, BARRIER);
-	what |= MASK_TC_BIT(rw, SYNC);
+	what |= MASK_TC_BIT(rw, SYNCIO);
 	what |= MASK_TC_BIT(rw, AHEAD);
 	what |= MASK_TC_BIT(rw, META);
 	what |= MASK_TC_BIT(rw, DISCARD);

block/bsg.c
@@ -244,7 +244,8 @@ bsg_validate_sgv4_hdr(struct request_queue *q, struct sg_io_v4 *hdr, int *rw)
  * map sg_io_v4 to a request.
  */
 static struct request *
-bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)
+bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
+	    u8 *sense)
 {
 	struct request_queue *q = bd->queue;
 	struct request *rq, *next_rq = NULL;

@@ -306,6 +307,10 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)
 		if (ret)
 			goto out;
 	}
+
+	rq->sense = sense;
+	rq->sense_len = 0;
+
 	return rq;
 out:
 	if (rq->cmd != rq->__cmd)

@@ -348,9 +353,6 @@ static void bsg_rq_end_io(struct request *rq, int uptodate)
 static void bsg_add_command(struct bsg_device *bd, struct request_queue *q,
 			    struct bsg_command *bc, struct request *rq)
 {
-	rq->sense = bc->sense;
-	rq->sense_len = 0;
-
 	/*
 	 * add bc command to busy queue and submit rq for io
 	 */

@@ -419,7 +421,7 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
 {
 	int ret = 0;
 
-	dprintk("rq %p bio %p %u\n", rq, bio, rq->errors);
+	dprintk("rq %p bio %p 0x%x\n", rq, bio, rq->errors);
 	/*
 	 * fill in all the output members
 	 */

@@ -635,7 +637,7 @@ static int __bsg_write(struct bsg_device *bd, const char __user *buf,
 		/*
 		 * get a request, fill in the blanks, and add to request queue
 		 */
-		rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm);
+		rq = bsg_map_hdr(bd, &bc->hdr, has_write_perm, bc->sense);
 		if (IS_ERR(rq)) {
 			ret = PTR_ERR(rq);
 			rq = NULL;

@@ -922,11 +924,12 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 		struct request *rq;
 		struct bio *bio, *bidi_bio = NULL;
 		struct sg_io_v4 hdr;
+		u8 sense[SCSI_SENSE_BUFFERSIZE];
 
 		if (copy_from_user(&hdr, uarg, sizeof(hdr)))
 			return -EFAULT;
 
-		rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE);
+		rq = bsg_map_hdr(bd, &hdr, file->f_mode & FMODE_WRITE, sense);
 		if (IS_ERR(rq))
 			return PTR_ERR(rq);
 

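Read together, the bsg.c hunks move sense-buffer wiring from bsg_add_command() into bsg_map_hdr(): every caller now supplies a buffer it owns before the request is submitted (the write path passes bc->sense, and the SG_IO ioctl path, which has no struct bsg_command at all, gains a stack buffer). A reduced sketch of the ownership shape, with hypothetical minimal types:

#include <stdio.h>

#define SCSI_SENSE_BUFFERSIZE 96

struct request {
	unsigned char *sense;
	int sense_len;
};

/* the builder wires up a caller-owned sense buffer up front, so the
 * buffer is guaranteed to exist when completion fills it in */
static void map_hdr(struct request *rq, unsigned char *sense)
{
	rq->sense = sense;
	rq->sense_len = 0;
}

int main(void)
{
	unsigned char sense[SCSI_SENSE_BUFFERSIZE];	/* outlives completion */
	struct request rq;

	map_hdr(&rq, sense);
	printf("sense buffer attached: %s\n", rq.sense ? "yes" : "no");
	return 0;
}
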
@@ -1087,6 +1087,14 @@ dev_t blk_lookup_devt(const char *name, int partno)
 		if (strcmp(dev_name(dev), name))
 			continue;
 
+		if (partno < disk->minors) {
+			/* We need to return the right devno, even
+			 * if the partition doesn't exist yet.
+			 */
+			devt = MKDEV(MAJOR(dev->devt),
+				     MINOR(dev->devt) + partno);
+			break;
+		}
 		part = disk_get_part(disk, partno);
 		if (part) {
 			devt = part_devt(part);

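The new branch in blk_lookup_devt() synthesizes a dev_t for a not-yet-existing partition by offsetting the disk's own minor number, relying on the convention that a disk's first partitions occupy the minors directly after the disk itself. The macro arithmetic, sketched in userspace with the kernel's 20-bit minor layout:

#include <stdio.h>

/* kernel dev_t layout: 12-bit major, 20-bit minor */
#define MINORBITS	20
#define MINORMASK	((1U << MINORBITS) - 1)
#define MAJOR(dev)	((unsigned int)((dev) >> MINORBITS))
#define MINOR(dev)	((unsigned int)((dev) & MINORMASK))
#define MKDEV(ma, mi)	(((ma) << MINORBITS) | (mi))

int main(void)
{
	unsigned int sda = MKDEV(8, 0);		/* whole disk, e.g. sda */
	int partno = 3;

	/* devno of the (possibly not yet scanned) partition */
	unsigned int part = MKDEV(MAJOR(sda), MINOR(sda) + partno);

	printf("%u:%u\n", MAJOR(part), MINOR(part));	/* prints 8:3 */
	return 0;
}
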
@@ -45,7 +45,13 @@ struct priv {
 
 static inline void setbit128_bbe(void *b, int bit)
 {
-	__set_bit(bit ^ 0x78, b);
+	__set_bit(bit ^ (0x80 -
+#ifdef __BIG_ENDIAN
+			 BITS_PER_LONG
+#else
+			 BITS_PER_BYTE
+#endif
+			), b);
 }
 
 static int setkey(struct crypto_tfm *parent, const u8 *key,

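The lrw change fixes setbit128_bbe() for big-endian machines: the helper converts an LRW bit index (bit 0 being the least significant bit of a 128-bit big-endian integer) into the index expected by __set_bit(), which operates on an array of host-order longs. XOR with 0x78 performs that conversion only when bytes are little-endian; in general the constant is 0x80 minus the granularity at which host order agrees with the big-endian view (the byte on LE, the whole long on BE). A userspace re-creation that can be sanity-checked on either build; __BIG_ENDIAN__ is the GCC predefine, standing in for the kernel's endianness macros:

#include <stdio.h>
#include <string.h>

#define BITS_PER_BYTE 8
#define BITS_PER_LONG ((int)(sizeof(long) * BITS_PER_BYTE))

/* userspace stand-in for the kernel's __set_bit() on a long array */
static void set_bit_long(int nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static void setbit128_bbe(void *b, int bit)
{
	set_bit_long(bit ^ (0x80 -
#ifdef __BIG_ENDIAN__
			    BITS_PER_LONG
#else
			    BITS_PER_BYTE
#endif
			   ), b);
}

int main(void)
{
	union {
		unsigned char b[16];
		unsigned long w[16 / sizeof(long)];
	} u;

	memset(&u, 0, sizeof(u));
	setbit128_bbe(u.w, 0);		/* LSB: bit 0 of the last byte */
	setbit128_bbe(u.w, 127);	/* MSB: bit 7 of byte 0 */

	printf("buf[0]=0x%02x buf[15]=0x%02x\n", u.b[0], u.b[15]);
	/* expected on either endianness: buf[0]=0x80 buf[15]=0x01 */
	return 0;
}
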
@@ -254,13 +254,6 @@ config ACPI_PCI_SLOT
 	  help you correlate PCI bus addresses with the physical geography
 	  of your slots. If you are unsure, say N.
 
-config ACPI_SYSTEM
-	bool
-	default y
-	help
-	  This driver will enable your system to shut down using ACPI, and
-	  dump your ACPI DSDT table using /proc/acpi/dsdt.
-
 config X86_PM_TIMER
 	bool "Power Management Timer Support" if EMBEDDED
 	depends on X86

Some files were not shown because too many files have changed in this diff.