#
# Makefile for the linux kernel.
#

obj-y = sched.o fork.o exec_domain.o panic.o printk.o \
        cpu.o exit.o itimer.o time.o softirq.o resource.o \
        sysctl.o capability.o ptrace.o timer.o user.o \
        signal.o sys.o kmod.o workqueue.o pid.o \
        rcupdate.o extable.o params.o posix-timers.o \
        kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o \
        hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
        notifier.o ksysfs.o pm_qos_params.o sched_clock.o

CFLAGS_REMOVE_sched.o = -mno-spe

ifdef CONFIG_FTRACE
# Do not trace debug files and internal ftrace files
CFLAGS_REMOVE_lockdep.o = -pg
CFLAGS_REMOVE_lockdep_proc.o = -pg
CFLAGS_REMOVE_mutex-debug.o = -pg
CFLAGS_REMOVE_rtmutex-debug.o = -pg
CFLAGS_REMOVE_cgroup-debug.o = -pg
CFLAGS_REMOVE_sched_clock.o = -pg
CFLAGS_REMOVE_sched.o = -mno-spe -pg
endif

obj-$(CONFIG_PROFILING) += profile.o
obj-$(CONFIG_SYSCTL_SYSCALL_CHECK) += sysctl_check.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-y += time/
obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
[PATCH] lockdep: core
Do 'make oldconfig' and accept all the defaults for new config options -
reboot into the kernel and if everything goes well it should boot up fine and
you should have /proc/lockdep and /proc/lockdep_stats files.
Typically, if the lock validator finds some problem it will print out
voluminous debug output that begins with "BUG: ..."; that syslog output
can be used by kernel developers to figure out the precise locking scenario.
What does the lock validator do? It "observes" and maps all locking rules as
they occur dynamically (as triggered by the kernel's natural use of spinlocks,
rwlocks, mutexes and rwsems). Whenever the lock validator subsystem detects a
new locking scenario, it validates this new rule against the existing set of
rules. If this new rule is consistent with the existing set of rules then the
new rule is added transparently and the kernel continues as normal. If the
new rule could create a deadlock scenario then this condition is printed out.
When determining validity of locking, all possible "deadlock scenarios" are
considered: assuming arbitrary number of CPUs, arbitrary irq context and task
context constellations, running arbitrary combinations of all the existing
locking scenarios. In a typical system this means millions of separate
scenarios. This is why we call it a "locking correctness" validator - for all
rules that are observed, the lock validator proves with mathematical
certainty that a deadlock could not occur (assuming that the lock validator
implementation itself is correct and its internal data structures are not
corrupted by some other kernel subsystem). [see more details and conditionals
of this statement in include/linux/lockdep.h and
Documentation/lockdep-design.txt]
Furthermore, this "all possible scenarios" property of the validator also
enables the finding of complex, highly unlikely multi-CPU multi-context races
via single-context rules, drastically increasing the likelihood of finding
bugs. In practical terms: the lock validator already found a bug in the
upstream kernel that could only occur on systems with 3 or more CPUs, and
which needed 3 very unlikely code sequences to occur at once on the 3 CPUs.
That bug was found and reported on a single-CPU system (!). So in essence a
race will be found "piecemeal", triggering all the necessary components for
the race without having to reproduce the race scenario itself! In its
short existence the lock validator has found and reported many bugs before
they actually caused a real deadlock.
To further increase the efficiency of the validator, the mapping is not per
"lock instance", but per "lock-class". For example, all struct inode objects
in the kernel have inode->inotify_mutex. If there are 10,000 inodes cached,
then there are 10,000 lock objects. But ->inotify_mutex is a single "lock
type", and all locking activities that occur against ->inotify_mutex are
"unified" into this single lock-class. The advantage of the lock-class
approach is that all historical ->inotify_mutex uses are mapped into a single
(and as narrow as possible) set of locking rules - regardless of how many
different tasks or inode structures it took to build this set of rules. The
set of rules persists during the lifetime of the kernel.
To see the rough magnitude of checking that the lock validator does, here's a
portion of /proc/lockdep_stats, fresh after bootup:
lock-classes: 694 [max: 2048]
direct dependencies: 1598 [max: 8192]
indirect dependencies: 17896
all direct dependencies: 16206
dependency chains: 1910 [max: 8192]
in-hardirq chains: 17
in-softirq chains: 105
in-process chains: 1065
stack-trace entries: 38761 [max: 131072]
combined max dependencies: 2033928
hardirq-safe locks: 24
hardirq-unsafe locks: 176
softirq-safe locks: 53
softirq-unsafe locks: 137
irq-safe locks: 59
irq-unsafe locks: 176
The lock validator has observed 1598 actual single-thread locking patterns,
and has validated all possible 2033928 distinct locking scenarios.
More details about the design of the lock validator can be found in
Documentation/lockdep-design.txt, which can also be found at:
http://redhat.com/~mingo/lockdep-patches/lockdep-design.txt
[bunk@stusta.de: cleanups]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
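As a rough user-space illustration of the kind of rule violation the validator
reports (this uses pthreads, not the kernel locking API), the two functions
below acquire the same pair of locks in opposite order; lockdep flags the
in-kernel equivalent of this AB/BA pattern as soon as both orderings have been
observed, even if the deadlock itself never triggers:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Path 1 establishes the ordering rule "A is taken before B". */
static void path_one(void)
{
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
}

/* Path 2 takes the same locks in the opposite order.  Two threads running
 * path_one() and path_two() concurrently can deadlock; a validator like
 * lockdep reports the ordering inconsistency once both patterns have been
 * seen, without the deadlock ever having to happen. */
static void path_two(void)
{
	pthread_mutex_lock(&lock_b);
	pthread_mutex_lock(&lock_a);
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
}

int main(void)
{
	path_one();
	path_two();	/* runs fine single-threaded, but the rules conflict */
	printf("no deadlock this time - the bug is still there\n");
	return 0;
}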
obj-$(CONFIG_LOCKDEP) += lockdep.o
ifeq ($(CONFIG_PROC_FS),y)
obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
endif
obj-$(CONFIG_FUTEX) += futex.o
ifeq ($(CONFIG_COMPAT),y)
obj-$(CONFIG_FUTEX) += futex_compat.o
endif
obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
obj-$(CONFIG_USE_GENERIC_SMP_HELPERS) += smp.o
obj-$(CONFIG_SMP) += spinlock.o
[PATCH] spinlock consolidation
This patch (written by me and also containing many suggestions of Arjan van
de Ven) does a major cleanup of the spinlock code. It does the following
things:
- consolidates and enhances the spinlock/rwlock debugging code
- simplifies the asm/spinlock.h files
- encapsulates the raw spinlock type and moves generic spinlock
features (such as ->break_lock) into the generic code.
- cleans up the spinlock code hierarchy to get rid of the spaghetti.
Most notably there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c. (previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds)
Also, I've enhanced the rwlock debugging facility: it will now track
write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which will work for both soft and hard
spin/rwlock lockups.
The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:
include/asm-i386/spinlock_types.h | 16
include/asm-x86_64/spinlock_types.h | 16
I have also split up the various spinlock variants into separate files,
making it easier to see which does what. The new layout is:
SMP | UP
----------------------------|-----------------------------------
asm/spinlock_types_smp.h | linux/spinlock_types_up.h
linux/spinlock_types.h | linux/spinlock_types.h
asm/spinlock_smp.h | linux/spinlock_up.h
linux/spinlock_api_smp.h | linux/spinlock_api_up.h
linux/spinlock.h | linux/spinlock.h
/*
* here's the role of the various spinlock/rwlock related include files:
*
* on SMP builds:
*
* asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
* initializers
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
* implementations, mostly inline assembly code
*
* (also included on UP-debug builds:)
*
* linux/spinlock_api_smp.h:
* contains the prototypes for the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*
* on UP builds:
*
* linux/spinlock_types_up.h:
* contains the generic, simplified UP spinlock type.
* (which is an empty structure on non-debug builds)
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* linux/spinlock_up.h:
* contains the __raw_spin_*()/etc. version of UP
* builds. (which are NOPs on non-debug, non-preempt
* builds)
*
* (included on UP-non-debug builds:)
*
* linux/spinlock_api_up.h:
* builds the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*/
All SMP and UP architectures are converted by this patch.
arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
cross-compilers. m32r, mips, sh and sparc have not been tested yet, but should
be mostly fine.
From: Grant Grundler <grundler@parisc-linux.org>
Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
Builds 32-bit SMP kernel (not booted or tested). I did not try to build
non-SMP kernels. That should be trivial to fix up later if necessary.
I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids
some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
are well tested and contained entirely inside arch specific code. I do NOT
expect any new issues to arise with them.
If someone does ever need to use debug/metrics with them, then they will
need to unravel this hairball between spinlocks, atomic ops, and bit ops
that exist only because parisc has exactly one atomic instruction: LDCW
(load and clear word).
From: "Luck, Tony" <tony.luck@intel.com>
ia64 fix
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
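As a rough sketch of the UP spinlock type described in the comment above
(hypothetical names, not the kernel's actual definitions; DEBUG_SPINLOCK_SKETCH
stands in for CONFIG_DEBUG_SPINLOCK):

#include <stdio.h>

/* On uniprocessor, non-debug builds the lock type can be an empty structure
 * (a GCC extension, as used by the kernel), so taking the "lock" compiles
 * away to nothing; debug builds keep real state to check. */
#ifdef DEBUG_SPINLOCK_SKETCH
typedef struct {
	volatile unsigned int slock;
} up_spinlock_sketch_t;
#else
typedef struct { } up_spinlock_sketch_t;
#endif

int main(void)
{
	printf("sizeof(up_spinlock_sketch_t) = %zu\n",
	       sizeof(up_spinlock_sketch_t));
	return 0;
}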
obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
obj-$(CONFIG_UID16) += uid16.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_KALLSYMS) += kallsyms.o
obj-$(CONFIG_PM) += power/
obj-$(CONFIG_BSD_PROCESS_ACCT) += acct.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
Task Control Groups: basic task cgroup framework
Generic Process Control Groups
--------------------------
There have recently been various proposals floating around for
resource management/accounting and other task grouping subsystems in
the kernel, including ResGroups, User BeanCounters, NSProxy
cgroups, and others. These all need the basic abstraction of being
able to group together multiple processes in an aggregate, in order to
track/limit the resources permitted to those processes, or control
other behaviour of the processes, and all implement this grouping in
different ways.
This patchset provides a framework for tracking and grouping processes
into arbitrary "cgroups" and assigning arbitrary state to those
groupings, in order to control the behaviour of the cgroup as an
aggregate.
The intention is that the various resource management and
virtualization/cgroup efforts can also become task cgroup
clients, with the result that:
- the userspace APIs are (somewhat) normalised
- it's easier to test e.g. the ResGroups CPU controller in
conjunction with the BeanCounters memory controller, or use either of
them as the resource-control portion of a virtual server system.
- the additional kernel footprint of any of the competing resource
management systems is substantially reduced, since they no longer need
to provide process grouping/containment themselves, hence improving their
chances of getting into the kernel
This patch:
Add the main task cgroups framework - the cgroup filesystem, and the
basic structures for tracking membership and associating subsystem state
objects to tasks.
Signed-off-by: Paul Menage <menage@google.com>
Cc: Serge E. Hallyn <serue@us.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: Kirill Korotaev <dev@openvz.org>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
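For illustration, assuming a cgroup hierarchy is already mounted (for example
at /dev/cgroup) and a group directory has been created in it, the filesystem
interface described above moves a task into a group when its pid is written to
that group's "tasks" file. A hedged user-space sketch; the mount point and
group name are assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical paths: assumes a cgroup hierarchy mounted at /dev/cgroup and
 * a group directory "demo" created there beforehand. */
#define CGROUP_TASKS_FILE "/dev/cgroup/demo/tasks"

int main(void)
{
	FILE *f = fopen(CGROUP_TASKS_FILE, "w");

	if (!f) {
		perror("fopen " CGROUP_TASKS_FILE);
		return EXIT_FAILURE;
	}
	/* Writing a pid into the group's "tasks" file attaches that task to
	 * the group; the subsystems bound to the hierarchy then apply their
	 * accounting/control to it. */
	fprintf(f, "%d\n", getpid());
	fclose(f);
	return EXIT_SUCCESS;
}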
obj-$(CONFIG_CGROUPS) += cgroup.o
obj-$(CONFIG_CGROUP_DEBUG) += cgroup_debug.o
obj-$(CONFIG_CPUSETS) += cpuset.o
obj-$(CONFIG_CGROUP_NS) += ns_cgroup.o
obj-$(CONFIG_UTS_NS) += utsname.o
obj-$(CONFIG_USER_NS) += user_namespace.o
obj-$(CONFIG_PID_NS) += pid_namespace.o
obj-$(CONFIG_IKCONFIG) += configs.o
obj-$(CONFIG_RESOURCE_COUNTERS) += res_counter.o
obj-$(CONFIG_STOP_MACHINE) += stop_machine.o
obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o
obj-$(CONFIG_AUDIT) += audit.o auditfilter.o
obj-$(CONFIG_AUDITSYSCALL) += auditsc.o
[PATCH] audit: watching subtrees
New kind of audit rule predicates: "object is visible in given subtree".
The part that can be sanely implemented, that is. Limitations:
* if you have a hardlink from outside of the tree, you'd better watch
it too (or just watch the object itself, obviously)
* if you mount something under a watched tree, tell audit
that the new chunk should be added to the watched subtrees
* if you umount something in a watched tree and it's still mounted
elsewhere, you will get matches on events happening there. A new command
tells audit to recalculate the trees, trimming such sources of false
positives.
Note that it's _not_ about the path - if something is mounted in several
places (multiple mounts, bindings, different namespaces, etc.), the match does
_not_ depend on which one we are using for access.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
obj-$(CONFIG_AUDIT_TREE) += audit_tree.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_KGDB) += kgdb.o
obj-$(CONFIG_DETECT_SOFTLOCKUP) += softlockup.o
obj-$(CONFIG_GENERIC_HARDIRQS) += irq/
obj-$(CONFIG_SECCOMP) += seccomp.o
obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
obj-$(CONFIG_CLASSIC_RCU) += rcuclassic.o
obj-$(CONFIG_PREEMPT_RCU) += rcupreempt.o
ifeq ($(CONFIG_PREEMPT_RCU),y)
obj-$(CONFIG_RCU_TRACE) += rcupreempt_trace.o
endif
obj-$(CONFIG_RELAY) += relay.o
obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
obj-$(CONFIG_MARKERS) += marker.o
tracing: Kernel Tracepoints
Implementation of kernel tracepoints, inspired by the Linux Kernel
Markers. Allows complete typing verification by declaring both tracing
statement inline functions and probe registration/unregistration static
inline functions within the same macro "DEFINE_TRACE". No format string
is required. See the tracepoint Documentation and Samples patches for
usage examples.
Taken from the documentation patch :
"A tracepoint placed in code provides a hook to call a function (probe)
that you can provide at runtime. A tracepoint can be "on" (a probe is
connected to it) or "off" (no probe is attached). When a tracepoint is
"off" it has no effect, except for adding a tiny time penalty (checking
a condition for a branch) and space penalty (adding a few bytes for the
function call at the end of the instrumented function and adding a data
structure in a separate section). When a tracepoint is "on", the
function you provide is called each time the tracepoint is executed, in
the execution context of the caller. When the function provided ends its
execution, it returns to the caller (continuing from the tracepoint
site).
You can put tracepoints at important locations in the code. They are
lightweight hooks that can pass an arbitrary number of parameters, whose
prototypes are described in a tracepoint declaration placed in a header
file."
Addition and removal of tracepoints is synchronized by RCU using the
scheduler (and preempt_disable) as guarantees to find a quiescent state
(this is really RCU "classic"). The update side uses rcu_barrier_sched()
with call_rcu_sched() and the read/execute side uses
"preempt_disable()/preempt_enable()".
We make sure the previous array containing probes, which has been
scheduled for deletion by the rcu callback, is indeed freed before we
proceed to the next update. It therefore limits the rate of modification
of a single tracepoint to one update per RCU period. The objective here
is to permit fast batch add/removal of probes on _different_
tracepoints.
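As a rough user-space analogue of the on/off mechanism described above (this
is not the kernel tracepoint API; the names are invented for the sketch, and
the real implementation supports multiple probes and synchronizes updates with
RCU as noted):

#include <stdio.h>

/* Hypothetical stand-in for a tracepoint: a probe pointer that is NULL when
 * the hook is "off" and points to a callback when it is "on". */
static void (*trace_sched_switch_probe)(int prev_pid, int next_pid);

static void sched_switch(int prev_pid, int next_pid)
{
	/* The only cost while "off" is this branch on the probe pointer. */
	if (trace_sched_switch_probe)
		trace_sched_switch_probe(prev_pid, next_pid);
	/* ... rest of the instrumented function ... */
}

static void my_probe(int prev_pid, int next_pid)
{
	printf("switch %d -> %d\n", prev_pid, next_pid);
}

int main(void)
{
	sched_switch(1, 2);			/* hook off: no output  */
	trace_sched_switch_probe = my_probe;	/* "register" the probe */
	sched_switch(2, 3);			/* hook on: probe runs  */
	return 0;
}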
Changelog:
- Use #name ":" #proto as string to identify the tracepoint in the
tracepoint table. This will make sure no type mismatch happens due to
connection of a probe with the wrong type to a tracepoint declared with
the same name in a different header.
- Add tracepoint_entry_free_old.
- Change __TO_TRACE to get rid of the 'i' iterator.
Masami Hiramatsu <mhiramat@redhat.com> :
Tested on x86-64.
Performance impact of a tracepoint : same as markers, except that it
adds about 70 bytes of instructions in an unlikely branch of each
instrumented function (the for loop, the stack setup and the function
call). It currently adds a memory read, a test and a conditional branch
at the instrumentation site (in the hot path). Immediate values will
eventually change this into a load immediate, test and branch, which
removes the memory read which will make the i-cache impact smaller
(changing the memory read for a load immediate removes 3-4 bytes per
site on x86_32 (depending on mov prefixes), or 7-8 bytes on x86_64, it
also saves the d-cache hit).
About the performance impact of tracepoints (which is comparable to
markers), even without immediate values optimizations, tests done by
Hideo Aoki on ia64 show no regression. His test case was using hackbench
on a kernel where scheduler instrumentation (about 5 events in the
scheduler code) was added.
Quoting Hideo Aoki about Markers :
I evaluated overhead of kernel marker using linux-2.6-sched-fixes git
tree, which includes several markers for LTTng, using an ia64 server.
While the immediate trace mark feature isn't implemented on ia64, there
is no major performance regression. So, I think that we don't have any
issues to propose merging marker point patches into Linus's tree from
the viewpoint of performance impact.
I prepared two kernels to evaluate. The first one was compiled without
CONFIG_MARKERS. The second one had CONFIG_MARKERS enabled.
I downloaded the original hackbench from the following URL:
http://devresources.linux-foundation.org/craiger/hackbench/src/hackbench.c
I ran hackbench 5 times in each condition and calculated the average and
difference between the kernels.
The parameter of hackbench: every 50 from 50 to 800
The number of CPUs of the server: 2, 4, and 8
Below are the results. As you can see, no major performance regression
was found in any case. Even if the number of processes increases, the
differences between the marker-enabled kernel and the marker-disabled
kernel don't increase. Moreover, if the number of CPUs increases, the
differences don't increase either.
Curiously, the marker-enabled kernel is better than the marker-disabled
kernel in more than half of the cases, though I guess this comes from
differences in memory access patterns.
* 2 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 4.811 | 4.872 | +0.061 | +1.27 |
100 | 9.854 | 10.309 | +0.454 | +4.61 |
150 | 15.602 | 15.040 | -0.562 | -3.6 |
200 | 20.489 | 20.380 | -0.109 | -0.53 |
250 | 25.798 | 25.652 | -0.146 | -0.56 |
300 | 31.260 | 30.797 | -0.463 | -1.48 |
350 | 36.121 | 35.770 | -0.351 | -0.97 |
400 | 42.288 | 42.102 | -0.186 | -0.44 |
450 | 47.778 | 47.253 | -0.526 | -1.1 |
500 | 51.953 | 52.278 | +0.325 | +0.63 |
550 | 58.401 | 57.700 | -0.701 | -1.2 |
600 | 63.334 | 63.222 | -0.112 | -0.18 |
650 | 68.816 | 68.511 | -0.306 | -0.44 |
700 | 74.667 | 74.088 | -0.579 | -0.78 |
750 | 78.612 | 79.582 | +0.970 | +1.23 |
800 | 85.431 | 85.263 | -0.168 | -0.2 |
--------------------------------------------------------------
* 4 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.586 | 2.584 | -0.003 | -0.1 |
100 | 5.254 | 5.283 | +0.030 | +0.56 |
150 | 8.012 | 8.074 | +0.061 | +0.76 |
200 | 11.172 | 11.000 | -0.172 | -1.54 |
250 | 13.917 | 14.036 | +0.119 | +0.86 |
300 | 16.905 | 16.543 | -0.362 | -2.14 |
350 | 19.901 | 20.036 | +0.135 | +0.68 |
400 | 22.908 | 23.094 | +0.186 | +0.81 |
450 | 26.273 | 26.101 | -0.172 | -0.66 |
500 | 29.554 | 29.092 | -0.461 | -1.56 |
550 | 32.377 | 32.274 | -0.103 | -0.32 |
600 | 35.855 | 35.322 | -0.533 | -1.49 |
650 | 39.192 | 38.388 | -0.804 | -2.05 |
700 | 41.744 | 41.719 | -0.025 | -0.06 |
750 | 45.016 | 44.496 | -0.520 | -1.16 |
800 | 48.212 | 47.603 | -0.609 | -1.26 |
--------------------------------------------------------------
* 8 CPUs
Number of | without | with | diff | diff |
processes | Marker [Sec] | Marker [Sec] | [Sec] | [%] |
--------------------------------------------------------------
50 | 2.094 | 2.072 | -0.022 | -1.07 |
100 | 4.162 | 4.273 | +0.111 | +2.66 |
150 | 6.485 | 6.540 | +0.055 | +0.84 |
200 | 8.556 | 8.478 | -0.078 | -0.91 |
250 | 10.458 | 10.258 | -0.200 | -1.91 |
300 | 12.425 | 12.750 | +0.325 | +2.62 |
350 | 14.807 | 14.839 | +0.032 | +0.22 |
400 | 16.801 | 16.959 | +0.158 | +0.94 |
450 | 19.478 | 19.009 | -0.470 | -2.41 |
500 | 21.296 | 21.504 | +0.208 | +0.98 |
550 | 23.842 | 23.979 | +0.137 | +0.57 |
600 | 26.309 | 26.111 | -0.198 | -0.75 |
650 | 28.705 | 28.446 | -0.259 | -0.9 |
700 | 31.233 | 31.394 | +0.161 | +0.52 |
750 | 34.064 | 33.720 | -0.344 | -1.01 |
800 | 36.320 | 36.114 | -0.206 | -0.57 |
--------------------------------------------------------------
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: 'Peter Zijlstra' <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
obj-$(CONFIG_LATENCYTOP) += latencytop.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
obj-$(CONFIG_FTRACE) += trace/
obj-$(CONFIG_TRACING) += trace/
obj-$(CONFIG_SMP) += sched_cpupri.o

ifneq ($(CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER),y)
# According to Alan Modra <alan@linuxcare.com.au>, the -fno-omit-frame-pointer is
# needed for x86 only. Why this used to be enabled for all architectures is beyond
# me. I suspect most platforms don't need this, but until we know that for sure
# I turn this off for IA-64 only. Andreas Schwab says it's also needed on m68k
# to get a correct value for the wait-channel (WCHAN in ps). --davidm
CFLAGS_sched.o := $(PROFILING) -fno-omit-frame-pointer
endif

$(obj)/configs.o: $(obj)/config_data.h

# config_data.h contains the same information as ikconfig.h but gzipped.
# Info from config_data can be extracted from /proc/config*
targets += config_data.gz
$(obj)/config_data.gz: .config FORCE
	$(call if_changed,gzip)

quiet_cmd_ikconfiggz = IKCFG $@
      cmd_ikconfiggz = (echo "static const char kernel_config_data[] = MAGIC_START"; cat $< | scripts/bin2c; echo "MAGIC_END;") > $@
targets += config_data.h
$(obj)/config_data.h: $(obj)/config_data.gz FORCE
	$(call if_changed,ikconfiggz)
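When CONFIG_IKCONFIG_PROC is also enabled, the same gzipped configuration is
exposed as /proc/config.gz; a small user-space sketch that dumps it using zlib
(assumes zlib is installed and the proc file exists; build with -lz):

#include <stdio.h>
#include <zlib.h>

/* Print the running kernel's configuration from /proc/config.gz. */
int main(void)
{
	char line[1024];
	gzFile f = gzopen("/proc/config.gz", "rb");

	if (!f) {
		fprintf(stderr, "could not open /proc/config.gz\n");
		return 1;
	}
	while (gzgets(f, line, sizeof(line)))
		fputs(line, stdout);
	gzclose(f);
	return 0;
}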
avoid overflows in kernel/time.c
When the conversion factor between jiffies and milli- or microseconds is
not a single multiply or divide, as for the case of HZ == 300, we currently
do a multiply followed by a divide. The intervening result, however, is
subject to overflows, especially since the fraction is not simplified (for
HZ == 300, we multiply by 300 and divide by 1000).
This is exposed to the user when passing a large timeout to poll(), for
example.
This patch replaces the multiply-divide with a reciprocal multiplication on
32-bit platforms. When the input is an unsigned long, there is no portable
way to do this on 64-bit platforms, since it requires a 128-bit intermediate
result (which gcc does support on 64-bit platforms but may generate libgcc
calls, e.g. on 64-bit s390); but since the output is a 32-bit integer in the
cases affected, just simplify the multiply-divide (*3/10 instead of
*300/1000).
The reciprocal multiply used can have off-by-one errors in the upper half
of the valid output range. This could be avoided at the expense of having
to deal with a potential 65-bit intermediate result. Since the intent is
to avoid overflow problems and most of the other time conversions are only
semiexact, the off-by-one errors were considered an acceptable tradeoff.
At Ralf Baechle's suggestion, this version uses a Perl script to compute
the necessary constants. We already have dependencies on Perl for kernel
compiles. This does, however, require the Perl module Math::BigInt, which
is included in the standard Perl distribution starting with version 5.8.0.
In order to support older versions of Perl, include a table of canned
constants in the script itself, and structure the script so that
Math::BigInt isn't required if pulling values from said table.
Running the script requires that the HZ value is available from the
Makefile. Thus, this patch also adds the Kconfig variable CONFIG_HZ to the
architectures which didn't already have it (alpha, cris, frv, h8300, m32r,
m68k, m68knommu, sparc, v850, and xtensa.) It does *not* touch the sh or
sh64 architectures, since Paul Mundt has dealt with those separately in the
sh tree.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Ralf Baechle <ralf@linux-mips.org>,
Cc: Sam Ravnborg <sam@ravnborg.org>,
Cc: Paul Mundt <lethal@linux-sh.org>,
Cc: Richard Henderson <rth@twiddle.net>,
Cc: Michael Starvik <starvik@axis.com>,
Cc: David Howells <dhowells@redhat.com>,
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>,
Cc: Hirokazu Takata <takata@linux-m32r.org>,
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
Cc: Roman Zippel <zippel@linux-m68k.org>,
Cc: William L. Irwin <sparclinux@vger.kernel.org>,
Cc: Chris Zankel <chris@zankel.net>,
Cc: H. Peter Anvin <hpa@zytor.com>,
Cc: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
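To make the overflow described above concrete, here is a small user-space
sketch of the HZ == 300 msecs-to-jiffies case (illustrative only; the
reciprocal constant below is chosen for the example and is not necessarily
what the generated timeconst.h uses):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t msecs = 20000000;	/* roughly 5.5 hours */

	/* Naive *300/1000: the 32-bit intermediate product wraps around. */
	uint32_t naive = msecs * 300u / 1000u;

	/* Simplified fraction *3/10: the intermediate still fits. */
	uint32_t simplified = msecs * 3u / 10u;

	/* Reciprocal multiplication: multiply by round(2^32 * 300/1000) and
	 * keep the high 32 bits; can be off by one near the top of the
	 * range, as the changelog notes. */
	uint32_t recip = (uint32_t)(((uint64_t)msecs * 1288490189u) >> 32);

	printf("naive      = %u (wrong, overflowed)\n", naive);
	printf("simplified = %u\n", simplified);
	printf("reciprocal = %u\n", recip);
	return 0;
}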
$(obj)/time.o: $(obj)/timeconst.h

quiet_cmd_timeconst = TIMEC $@
      cmd_timeconst = $(PERL) $< $(CONFIG_HZ) > $@
targets += timeconst.h
$(obj)/timeconst.h: $(src)/timeconst.pl FORCE
	$(call if_changed,timeconst)