On many platforms (including pSeries), smp_ops->message_pass is always
smp_muxed_ipi_message_pass. This changes arch/powerpc/kernel/smp.c so
that if smp_ops->message_pass is NULL, it calls smp_muxed_ipi_message_pass
directly.
This means that a platform doesn't need to set both .message_pass and
.cause_ipi, only one of them. It is a slight performance improvement
in that it gets rid of an indirect function call at the expense of a
predictable conditional branch.
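Roughly, the dispatch then looks like this (a sketch: the wrapper and config-guard names are assumptions, only smp_muxed_ipi_message_pass comes from the text above):

  static inline void do_message_pass(int cpu, int msg)
  {
      if (smp_ops->message_pass)
          smp_ops->message_pass(cpu, msg);
  #ifdef CONFIG_PPC_SMP_MUXED_IPI
      else
          smp_muxed_ipi_message_pass(cpu, msg);
  #endif
  }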
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Create a dummy irq_host using the generic dummy irq chip for the secondary
cpus to use. Create a direct irq mapping for the ipi and register the
ipi action handler against it. If for some unlikely reason part of this
fails then don't detect the secondary cpus.
This removes another instance of NO_IRQ_IGNORE, records the ipi stats
for the secondary cpus, and runs the ipi on the interrupt stack.
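Sketched against the irq_host API of this era (the allocation/mapping helper signatures are approximate and the file-scope variables are assumed to exist in the platform code; this is not the literal patch):

  static int __init psurge_secondary_ipi_init(void)
  {
      int rc = -ENOMEM;

      /* dummy host backed by the generic dummy irq chip */
      psurge_host = irq_alloc_host(NULL, IRQ_HOST_MAP_NOMAP, 0,
                                   &psurge_host_ops, 0);
      if (psurge_host)
          psurge_secondary_virq = irq_create_direct_mapping(psurge_host);
      if (psurge_secondary_virq)
          rc = request_irq(psurge_secondary_virq, psurge_ipi_intr,
                           IRQF_PERCPU, "IPI", NULL);
      if (rc)
          pr_err("Failed to setup secondary cpu IPI\n");

      return rc;  /* non-zero: don't detect the secondary cpus */
  }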
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Compile the new smp ipi mux and demux code only if a platform
will make use of it. The new config is selected as required.
The new cause_ipi smp op is only available conditionally, to point out
configs where the select is required; this turns setting the op into an
immediate compile failure instead of a deferred unresolved symbol at link
time.
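One way to read "only available conditionally" is a guarded member in the ops structure (a sketch; the config name and the other fields shown are assumptions for context):

  struct smp_ops_t {
      void (*message_pass)(int cpu, int msg);
  #ifdef CONFIG_PPC_SMP_MUXED_IPI
      void (*cause_ipi)(int cpu, unsigned long data);
  #endif
      int  (*probe)(void);
      int  (*kick_cpu)(int nr);
      /* ... */
  };
  /* a platform that sets .cause_ipi without selecting the config now gets
   * an immediate compile error rather than a link failure */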
This also creates a new config for PowerSurge powermac upgrade support
that can be disabled in expert mode but defaults to on.
I also removed the depends / default y on CONFIG_XICS since it is selected
by PSERIES.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Consolidate the mux and demux of ipi messages into smp.c and call
a new smp_ops callback to actually trigger the ipi.
The powerpc architecture code is optimised for having 4 distinct
ipi triggers, which are mapped to 4 distinct messages (ipi many, ipi
single, scheduler ipi, and enter debugger). However, several interrupt
controllers only provide a single software triggered interrupt that
can be delivered to each cpu. To resolve this limitation, each smp_ops
implementation created a per-cpu variable that is manipulated with atomic
bitops. Since these lines will be contended, they are optimally marked as
shared_aligned and take a full cache line for each cpu. Distro kernels
may have 2 or 3 of these in their config, each taking per-cpu space
even though at most one will be in use.
This consolidation removes smp_message_recv and replaces the single-call
action cases with direct calls from the common message recognition loop.
The complicated debugger ipi case with its muxed crash handling code is
moved to debug_ipi_action which is now called from the demux code (instead
of the multi-message action calling smp_message_recv).
I added a call to reschedule_action to increase the likelihood of correctly
merging the anticipated scheduler_ipi() hook coming from the scheduler
tree; that single required call can be inlined later.
The actual message decode is a copy of the old pseries xics code with its
memory barriers and cache line spacing, augmented with a per-cpu unsigned
long based on the book-e doorbell code. The optional data is set via a
callback from the implementation and is passed to the new cause-ipi hook
along with the logical cpu number. While currently only the doorbell
implementation uses this data, it should be almost zero cost to retrieve and
pass it -- it adds a single register load for the argument from the same
cache line to which we just completed a store and the register is dead
on return from the call. I extended the data element from unsigned int
to unsigned long in case some other code wanted to associate a pointer.
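For orientation, the shape of the mux/demux described above is roughly as follows (a simplified sketch: the per-cpu struct, barriers and the PPC_MSG_* / cause_ipi names follow the text, but the real code packs the flags differently and handles the debugger case under its own ifdef):

  /* kernel context: linux/percpu.h, linux/interrupt.h, asm/smp.h */
  struct cpu_messages {
      unsigned long messages;   /* pending PPC_MSG_* flags */
      unsigned long data;       /* optional, set by the platform */
  };
  static DEFINE_PER_CPU_SHARED_ALIGNED(struct cpu_messages, ipi_message);

  void smp_muxed_ipi_message_pass(int cpu, int msg)
  {
      struct cpu_messages *info = &per_cpu(ipi_message, cpu);

      set_bit(msg, &info->messages);   /* several senders may race */
      smp_mb();                        /* order flag vs. the actual IPI */
      smp_ops->cause_ipi(cpu, info->data);
  }

  irqreturn_t smp_ipi_demux(void)
  {
      struct cpu_messages *info = &__get_cpu_var(ipi_message);
      unsigned long all;

      mb();                            /* order IPI vs. reading the flags */
      do {
          all = xchg(&info->messages, 0);
          if (test_bit(PPC_MSG_CALL_FUNCTION, &all))
              generic_smp_call_function_interrupt();
          if (test_bit(PPC_MSG_RESCHEDULE, &all))
              reschedule_action(0, NULL);   /* single required call */
          if (test_bit(PPC_MSG_CALL_FUNC_SINGLE, &all))
              generic_smp_call_function_single_interrupt();
          if (test_bit(PPC_MSG_DEBUGGER_BREAK, &all))
              debug_ipi_action(0, NULL);
      } while (info->messages);

      return IRQ_HANDLED;
  }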
The doorbell check_self is replaced by a call to smp_muxed_ipi_resend,
conditioned on the CPU_DBELL feature. The ifdef guard could be relaxed
to CONFIG_SMP but I left it with BOOKE for now.
Also, the doorbell interrupt vector for book-e was not calling irq_enter
and irq_exit, which throws off cpu accounting and causes code to not
realize it is running in interrupt context. Add the missing calls.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Now that smp_ops->smp_message_pass is always called with an (online) cpu
number for the target, remove the checks for MSG_ALL and MSG_ALL_BUT_SELF.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When we start a cpu we use smp_ops->kick_cpu(), which currently
returns void, it should be able to fail. Convert it to return
int, and update all uses.
Convert all the current error cases to return -ENOENT, which is
what would eventually be returned by __cpu_up() currently when
it doesn't detect the cpu as coming up in time.
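The change in shape, as a sketch (the example_ helpers and their bodies are illustrative; only the int return and the -ENOENT convention come from the text):

  /* before: void (*kick_cpu)(int nr);   after: int (*kick_cpu)(int nr); */
  static int example_kick_cpu(int nr)
  {
      if (nr < 0 || nr >= NR_CPUS)
          return -ENOENT;   /* no such cpu: report it immediately */
      /* ... poke the secondary's spin loop / reset vector here ... */
      return 0;
  }

  /* __cpu_up() can then propagate the failure instead of waiting for the
   * bringup to time out: */
  static int example_cpu_up(unsigned int cpu)
  {
      int rc = smp_ops->kick_cpu(cpu);

      if (rc)
          return rc;
      /* ... wait for the cpu as before ... */
      return 0;
  }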
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Instead, keep it static, expose an accessor and use that from
the PowerMac code. Avoids easy namespace collisions and will
make it easier to consolidate with other implementations.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
On some machines that use i2c to synchronize the timebases (such
as PowerMac7,2/7,3 G5 machines), hotplug CPU would crash when
putting back a new CPU online due to the underlying i2c bus being
closed.
This uses the newly added bringup_done() callback to move the close
along with other housekeeping calls, and adds a CPU notifier to
re-open the i2c bus around subsequent hotplug operations.
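A sketch of the notifier side (the reopen/close helpers are hypothetical stand-ins for the low_i2c calls the platform actually uses):

  static int smp_core99_cpu_notify(struct notifier_block *self,
                                   unsigned long action, void *hcpu)
  {
      switch (action & ~CPU_TASKS_FROZEN) {
      case CPU_UP_PREPARE:
          reopen_timebase_i2c();    /* hypothetical helper */
          break;
      case CPU_ONLINE:
      case CPU_UP_CANCELED:
          close_timebase_i2c();     /* hypothetical helper */
          break;
      }
      return NOTIFY_OK;
  }

  static struct notifier_block smp_core99_cpu_nb = {
      .notifier_call = smp_core99_cpu_notify,
  };
  /* registered once at boot: register_cpu_notifier(&smp_core99_cpu_nb); */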
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The current code soft-disables, and then goes to NAP mode which
turns interrupts on. That means that if an interrupt occurs, we
will hit the masked interrupt code path which isn't what we want,
as it will return with EE off, which will either get us out of
NAP mode, or fail to enter it (according to spec).
Instead, let's just rely on the fact that it is safe to take
decrementer interrupts on an offline CPU and leave interrupts
enabled. We can also get rid of the special case in asm for
power4_cpu_offline_powersave() and just use power4_idle().
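In other words, the offline loop can be as simple as this sketch (the predicate is hypothetical; power4_idle() is the existing idle entry mentioned above):

  static void example_cpu_offline_loop(void)
  {
      local_irq_enable();                  /* only the decrementer will fire */
      while (!example_cpu_wanted_back())   /* hypothetical predicate */
          power4_idle();                   /* NAP; EE handling is inside */
  }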
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Those instructions do nothing on non-threaded processors such
as the 970s used in those machines.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Use the generic code, just adding the MPIC priority setting.
I don't see any use in mucking around with the decrementer,
as 32-bit will have EE off all along, and 64-bit will be able
to deal with it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Use generic cpu_state, call idle_task_exit() properly, and
remove smp_core99_cpu_die(), which isn't useful; the generic
function does the job just fine.
Remove the last remnants of cpu_enable(); everybody uses the normal
__cpu_up() path now.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Configuring a powerpc 32-bit kernel for both SMP and SUSPEND turns on
CPU_HOTPLUG so that disable_nonboot_cpus can be called by the common
suspend code. Previously the definition of cpu_die for ppc32 was in
the powermac platform code, causing it to be undefined if that platform
was not selected:
arch/powerpc/kernel/built-in.o: In function 'cpu_idle':
arch/powerpc/kernel/idle.c:98: undefined reference to 'cpu_die'
Move the code from setup_64 to smp.c and rename the power mac
versions to their specific names.
Note that this does not set up the cpu_die pointers in either
smp_ops (request that a given cpu die) or ppc_md (make this cpu die)
for other platforms, but there are generic versions in smp.c.
Reported-by: Matt Sealey <matt@genesi-usa.com>
Reported-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Vorontsov <avorontsov@mvista.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Since the *_map cpumask variants are deprecated, change the comments to
instead refer to *_mask.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
machine_is_compatible() is an OF-specific call. It should have
the of_ prefix to protect the global namespace.
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Michal Simek <monstr@monstr.eu>
Use the accessors rather than frobbing bits directly (the new versions
are const).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
The code for setting up the IPIs for SMP PowerSurge machines has
bitrotted; it needs to properly map the HW interrupt number.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The old PowerSurge SMP (ie, dual or quad 604 machines) code has
numerous issues in the modern world.
One is that cpu_possible_map is set too late (the device-tree is bogus),
so we fail to allocate the interrupt stacks and crash. Another
problem is that the timebase is frozen by the bringup of the
second CPU, so the delays in the generic code will hang; we need
to move some of the calling procedure into the powermac code.
This makes it boot again for me.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Impact: cleanup
It's been unused since about 1995. So remove all initialization of it in
preparation for actually removing the field.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
The PowerMac kernel occasionally fails to bring up the secondary CPUs on
SMP; the trigger seems to be fairly random and related to the location
of code and data.
This appears to be due to the initial loading of the TOC value by the
secondary processor, which now happens before we clear HID4:RM_CI (Real
Mode Cache Invalidate). This bit should really be cleared before we do
any load or store other than fetching code.
This fix works on the assumption that all SMP 64-bit PowerMacs use
variants of the 970 (which fortunately is true) by explicitly clearing
that bit, adding an slbia for good measure since RM_CI mode is known to
create bogus ERAT entries.
While at it, I also removed some spurious debug output that was left
enabled by mistake.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The hard_smp_processor_id functions are the appropriate interfaces for
managing physical CPU ids.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as
a position-independent executable (PIE) when it is set. This involves
processing the dynamic relocations in the image in the early stages of
booting, even if the kernel is being run at the address it is linked at,
since the linker does not necessarily fill in words in the image for
which there are dynamic relocations. (In fact the linker does fill in
such words for 64-bit executables, though not for 32-bit executables,
so in principle we could avoid calling relocate() entirely when we're
running a 64-bit kernel at the linked address.)
The dynamic relocations are processed by a new function relocate(addr),
where the addr parameter is the virtual address where the image will be
run. In fact we call it twice; once before calling prom_init, and again
when starting the main kernel. This means that reloc_offset() returns
0 in prom_init (since it has been relocated to the address it is running
at), which necessitated a few adjustments.
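For illustration, the core of such relocation processing looks roughly like this (a freestanding, userspace-style sketch using <elf.h> types; the in-kernel relocate() runs very early, handles only R_PPC64_RELATIVE, and deals with the real/virtual split glossed over here):

  #include <elf.h>

  /* addr: address the image will run at; link_addr: address it was linked at */
  static void example_relocate(unsigned long addr, unsigned long link_addr,
                               Elf64_Rela *rela, unsigned long nrela)
  {
      unsigned long bias = addr - link_addr;   /* 0 when run where linked */
      unsigned long i;

      for (i = 0; i < nrela; i++) {
          if (ELF64_R_TYPE(rela[i].r_info) != R_PPC64_RELATIVE)
              continue;
          /* always recompute from the addend: the linker may not have
           * filled the word in, so adjusting in place is not enough */
          *(unsigned long *)(rela[i].r_offset + bias) =
              rela[i].r_addend + bias;
      }
  }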
This also changes __va and __pa to use an equivalent definition that is
simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
constants (for 64-bit) whereas PHYSICAL_START is a variable (and
KERNELBASE ideally should be too, but isn't yet).
With this, relocatable kernels still copy themselves down to physical
address 0 and run there.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently create_branch() creates a branch instruction for you, and
patches it into the call site. In some circumstances it would be nice
to be able to create the instruction and patch it later, and also some
code might want to check for errors in the branch creation before
doing the patching. A future commit will change create_branch() to
check for errors.
For callers that don't care, replace create_branch() with
patch_branch(), which just creates the branch and patches it directly.
While we're touching all the callers, change to using unsigned int *,
as this seems to match usage better. That allows (and requires) us to
remove the volatile in the definition of vector in powermac/smp.c and
mpc86xx_smp.c; that's correct because, now that we're passing vector as
an unsigned int *, the compiler knows that its value might change
across the patch_branch() call.
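A sketch of the split (the encoding and range check are the standard ppc I-form branch; the flag values are assumed to come from asm/code-patching.h, and the real functions differ in detail):

  static unsigned int example_create_branch(const unsigned int *addr,
                                            unsigned long target, int flags)
  {
      long offset = target;

      if (!(flags & BRANCH_ABSOLUTE))
          offset -= (unsigned long)addr;

      /* a branch instruction only reaches +/- 32MB, in 4-byte steps */
      if (offset < -0x2000000 || offset > 0x1fffffc || (offset & 3))
          return 0;   /* callers can check for failure before patching */

      return 0x48000000 | (offset & 0x03fffffc) | (flags & 0x3);
  }

  static void example_patch_branch(unsigned int *addr, unsigned long target,
                                   int flags)
  {
      *addr = example_create_branch(addr, target, flags);
      flush_icache_range((unsigned long)addr, (unsigned long)addr + 4);
  }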
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We currently have a few routines for patching code in asm/system.h, because
they didn't fit anywhere else. I'd like to clean them up a little and add
some more, so first move them into a dedicated C file - they don't need to
be inlined.
While we're moving the code, drop create_function_call(); its intended
caller never got merged and will be replaced in future with something
different.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The per-cpu areas for the secondary CPU(s) aren't getting allocated
on old SMP powermacs that don't have the secondary CPU(s) listed in
the device tree, as per-cpu areas are now only allocated for CPUs in
the cpu_possible_map, and we aren't setting the bits for the secondary
CPU(s) until smp_prepare_cpus(), which is after per-cpu allocation.
Therefore this sets the bits for CPUs 1..3 in cpu_possible_map in
pmac_setup_arch, so they get per-cpu data allocated.
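A sketch of that early fixup (the helper name is made up; cpu_set()/cpu_possible_map are the pre-accessor-era interfaces, and the real code only does this on the affected machines):

  static void __init example_force_secondaries_possible(void)
  {
      int cpu;

      /* the device-tree doesn't list the secondaries, so mark cpus 1..3
       * possible before per-cpu areas are allocated */
      for (cpu = 1; cpu < 4; ++cpu)
          cpu_set(cpu, cpu_possible_map);
  }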
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
for consistency with other Open Firmware interfaces (and Sparc).
This is just a straight replacement.
This leaves the compatibility define in place.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This allows "hotplugging" of CPUs on G5 machines. CPUs that are
disabled are put into an idle loop with the decrementer frequency set
to minimum. To wake them up again we kick them just like when bringing
them up. To stop those CPUs from messing with any global state we stop
them from entering the timer interrupt.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Replace uses with of_find_node_by_name and for_each_node_by_name.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
On the old "powersurge" SMP powermacs, the second CPU is started up
by sending it an IPI, which has the side effect of stopping the
timebase clock (so the secondary CPU's timebase can be synchronized
with the primary's). The routine that did this used udelay, which
will hang forever when the timebase is stopped, since udelay now spins
until the timebase reaches a certain value.
The end result is that the kernel would hang when bringing up the
second CPU. This fixes it by using a simple loop which just does a
fixed number of iterations to generate the delay. These old systems
were all clocked at around 200 MHz or so, so a fixed number of
iterations is acceptable.
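The replacement delay is essentially a bounded spin, sketched here (the helper name and iteration count are illustrative; it only needs to be "long enough" on these ~200 MHz machines):

  static void __init psurge_rough_delay(unsigned long loops)
  {
      while (loops--)
          barrier();   /* stop the compiler from optimising the loop away */
  }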
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove struct pt_regs * from all handlers.
Also remove the regs argument from get_irq() functions.
Compile tested with arch/powerpc/configs/* and
arch/ppc/configs/prep_defconfig
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that get_property() returns a void *, there's no need to cast its
return value. Also, treat the return value as const, so we can
constify get_property later.
powermac platform & macintosh driver changes.
Built for pmac32_defconfig, g5_defconfig
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Use the new IRQF_ constants and remove the SA_INTERRUPT define
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When we stop allocating percpu memory for not-possible CPUs we must not touch
the percpu data for not-possible CPUs at all. The correct way of doing this
is to test cpu_possible() or to use for_each_cpu().
This patch is a kernel-wide sweep of all instances of NR_CPUS. I found very
few instances of this bug, if any. But the patch converts lots of open-coded
tests to use the preferred helper macros.
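The typical conversion looks like this (counter and the wrapper are made-up names; note that for_each_cpu() of this era took a single argument and was later renamed for_each_possible_cpu()):

  static DEFINE_PER_CPU(long, counter);

  static long example_sum_counters(void)
  {
      long total = 0;
      int i;

      /* old open-coded form touched per-cpu data of not-possible cpus:
       *     for (i = 0; i < NR_CPUS; i++)
       *             total += per_cpu(counter, i);
       */
      for_each_cpu(i)
          total += per_cpu(counter, i);

      return total;
  }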
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Kyle McMartin <kyle@parisc-linux.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Christian Zankel <chris@zankel.net>
Cc: Philippe Elie <phil.el@wanadoo.fr>
Cc: Nathan Scott <nathans@sgi.com>
Cc: Jens Axboe <axboe@suse.de>
Cc: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The variable `timebase' used to transfer the current timebase value
from one cpu to the other in smp_core99_give/take_timebase was only
an unsigned long, i.e. 32 bits on 32-bit machines. It needs to be
64 bits. This makes it a u64, and fixes the issue reported by Kyle
Moffett, that the two cpus see wildly different values for the time
of day.
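A sketch of the fix (function bodies abbreviated and renamed; get_tb()/set_tb() are the existing timebase accessors):

  static u64 timebase;   /* was: unsigned long, i.e. 32 bits on ppc32 */

  static void example_give_timebase(void)
  {
      /* ... freeze the timebase, then: */
      timebase = get_tb();                 /* full 64-bit value */
  }

  static void example_take_timebase(void)
  {
      set_tb(timebase >> 32, timebase & 0xffffffffULL);
      /* ... then unfreeze the timebase */
  }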
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is the platform function interpreter itself along with the backends
for UniN/U3/U4, mac-io, GPIOs and i2c. It adds the ability to execute
those do-platform-* scripts in the device-tree (at least for most
devices for which a backend is provided). This should replace the clock
spreading hacks properly. It might also have an impact on all sort of
machines since some of the scripts marked "at init" will now be executed
on boot (or others on sleep/wakeup); those will possibly do things
that the kernel didn't do at all, like setting some values into some i2c
devices (changing thermal sensor calibration or conversion rate) etc...
Thus regression testing is MUCH welcome. Also look for errors in dmesg.
That's also why I've left rather verbose debugging enabled in this
version of the patch.
(I do expect some Windtunnel G4s to show some errors as they have an i2c
clock chip on the PMU bus that uses some primitives that the i2c backend
doesn't implement yet. I really need users that have one of those
machines to come back to me so we can get that done right, though the
errors themselves should be harmless, I suspect the machine might not
run at full speed).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is the first part of a rework of the PowerMac i2c code. It
completely reworks the "low_i2c" layer. It is now more flexible,
supports KeyWest, SMU and PMU i2c busses, and provides functions to
match device nodes to i2c busses and adapters.
This patch also extends & fixes some bugs in the SMU driver related to i2c
support and removes the clock spreading hacks from the pmac feature code
rather than adapting them to the new API since they'll be replaced by
the platform function code completely in patch 3/5.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds some very basic support for the new machines, including the
Quad G5 (tested), and other new dual core based machines and iMac G5
iSight (untested). This is still experimental! There is no thermal
control yet, there is no proper handling of MSIs, etc... but it
boots, I have all 4 cores up on my machine. Compared to the previous
version of this patch, this one adds DART IOMMU support for the U4
chipset and thus should work fine on setups with more than 2Gb of RAM.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There's a few places where we need to fix things up for the kernel to work
if it's linked at 32MB:
- platforms/powermac/smp.c
To start secondary cpus on pmac we patch the reset vector, which is fine.
Except that if we're above 32MB we don't have enough bits for an absolute
branch; it needs to be relative.
- kernel/head_64.S
- A few branches in the cpu hold code need to load the full target address
and do a bctr.
- after_prom_start needs to load PHYSICAL_START as the dest address, not 0.
- The exception prolog needs to load the low word of the target address,
not just the low halfword.
- Fixup handling of the initial stab address.
- kernel/setup_64.c
smp_release_cpus() needs to write 1 to the spinloop flag near 0, not 32 MB.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We were using udelay in the loop on the primary cpu waiting for the
secondary cpu to take the timebase value. Unfortunately now that
udelay uses the timebase, and the timebase is stopped at this point,
the udelay never terminated. This fixes it by not using udelay, and
increases the number of loops before we time out to compensate.
Signed-off-by: Paul Mackerras <paulus@samba.org>