Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
	drivers/net/usb/r8152.c
	drivers/net/xen-netback/netback.c

Both the r8152 and netback conflicts were simple overlapping
changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2014-03-14 22:31:55 -04:00
commit 85dcce7a73
261 changed files with 2205 additions and 1640 deletions

View file

@ -124,12 +124,11 @@ the default being 204800 sectors (or 100MB).
Updating on-disk metadata
-------------------------
On-disk metadata is committed every time a REQ_SYNC or REQ_FUA bio is
written. If no such requests are made then commits will occur every
second. This means the cache behaves like a physical disk that has a
write cache (the same is true of the thin-provisioning target). If
power is lost you may lose some recent writes. The metadata should
always be consistent in spite of any crash.
On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the cache behaves like a physical disk that has a volatile write
cache. If power is lost you may lose some recent writes. The metadata
should always be consistent in spite of any crash.
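As a brief aside (not part of this patch), the practical consequence is the same as with any disk that has a volatile write cache: an application that needs its writes on stable storage must trigger the FLUSH itself. A minimal userspace sketch, assuming a file on a filesystem backed by the cache target at a made-up path:

/* sketch.c - fsync() ends in a FLUSH bio reaching the dm-cache device,
 * which forces the on-disk metadata commit described above. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char buf[] = "important data\n";
	int fd = open("/mnt/cached-fs/file", O_WRONLY | O_CREAT, 0644);

	if (fd < 0)
		return 1;
	if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf))
		return 1;
	if (fsync(fd))	/* data and cache metadata are durable after this */
		return 1;
	return close(fd) ? 1 : 0;
}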
The 'dirty' state for a cache block changes far too frequently for us
to keep updating it on the fly. So we treat it as a hint. In normal

View file

@ -116,6 +116,35 @@ Resuming a device with a new table itself triggers an event so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.
A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.
Updating on-disk metadata
-------------------------
On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache. If power is lost you may lose some recent
writes. The metadata should always be consistent in spite of any crash.
If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space). If metadata
space is exhausted or a metadata operation fails: the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation. Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed. It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted. Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. filesystem) it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.
Thin provisioning
-----------------
@ -258,10 +287,9 @@ ii) Status
should register for the event and then check the target's status.
held metadata root:
The location, in sectors, of the metadata root that has been
The location, in blocks, of the metadata root that has been
'held' for userspace read access. '-' indicates there is no
held root. This feature is not yet implemented so '-' is
always returned.
held root.
discard_passdown|no_discard_passdown
Whether or not discards are actually being passed down to the

View file

@ -1,4 +1,4 @@
Broadcom Capri Pin Controller
Broadcom BCM281xx Pin Controller
This is a pin controller for the Broadcom BCM281xx SoC family, which includes
BCM11130, BCM11140, BCM11351, BCM28145, and BCM28155 SoCs.
@ -7,14 +7,14 @@ BCM11130, BCM11140, BCM11351, BCM28145, and BCM28155 SoCs.
Required Properties:
- compatible: Must be "brcm,capri-pinctrl".
- compatible: Must be "brcm,bcm11351-pinctrl"
- reg: Base address of the PAD Controller register block and the size
of the block.
For example, the following is the bare minimum node:
pinctrl@35004800 {
compatible = "brcm,capri-pinctrl";
compatible = "brcm,bcm11351-pinctrl";
reg = <0x35004800 0x430>;
};
@ -119,7 +119,7 @@ Optional Properties (for HDMI pins):
Example:
// pin controller node
pinctrl@35004800 {
compatible = "brcm,capri-pinctrl";
compatible = "brcmbcm11351-pinctrl";
reg = <0x35004800 0x430>;
// pin configuration node

View file

@ -453,7 +453,7 @@ TP_STATUS_COPY : This flag indicates that the frame (and associated
enabled previously with setsockopt() and
the PACKET_COPY_THRESH option.
The number of frames than can be buffered to
The number of frames that can be buffered to
be read with recvfrom is limited like a normal socket.
See the SO_RCVBUF option in the socket (7) man page.

View file

@ -21,26 +21,38 @@ has such a feature).
SO_TIMESTAMPING:
Instructs the socket layer which kind of information is wanted. The
parameter is an integer with some of the following bits set. Setting
other bits is an error and doesn't change the current state.
Instructs the socket layer which kind of information should be collected
and/or reported. The parameter is an integer with some of the following
bits set. Setting other bits is an error and doesn't change the current
state.
SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamp in hardware
SOF_TIMESTAMPING_TX_SOFTWARE: if SOF_TIMESTAMPING_TX_HARDWARE is off or
fails, then do it in software
SOF_TIMESTAMPING_RX_HARDWARE: return the original, unmodified time stamp
as generated by the hardware
SOF_TIMESTAMPING_RX_SOFTWARE: if SOF_TIMESTAMPING_RX_HARDWARE is off or
fails, then do it in software
SOF_TIMESTAMPING_RAW_HARDWARE: return original raw hardware time stamp
SOF_TIMESTAMPING_SYS_HARDWARE: return hardware time stamp transformed to
the system time base
SOF_TIMESTAMPING_SOFTWARE: return system time stamp generated in
software
Four of the bits are requests to the stack to try to generate
timestamps. Any combination of them is valid.
SOF_TIMESTAMPING_TX/RX determine how time stamps are generated.
SOF_TIMESTAMPING_RAW/SYS determine how they are reported in the
following control message:
SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamps in hardware
SOF_TIMESTAMPING_TX_SOFTWARE: try to obtain send time stamps in software
SOF_TIMESTAMPING_RX_HARDWARE: try to obtain receive time stamps in hardware
SOF_TIMESTAMPING_RX_SOFTWARE: try to obtain receive time stamps in software
The other three bits control which timestamps will be reported in a
generated control message. If none of these bits are set or if none of
the set bits correspond to data that is available, then the control
message will not be generated:
SOF_TIMESTAMPING_SOFTWARE: report systime if available
SOF_TIMESTAMPING_SYS_HARDWARE: report hwtimetrans if available
SOF_TIMESTAMPING_RAW_HARDWARE: report hwtimeraw if available
It is worth noting that timestamps may be collected for reasons other
than being requested by a particular socket with
SOF_TIMESTAMPING_[TR]X_(HARD|SOFT)WARE. For example, most drivers that
can generate hardware receive timestamps ignore
SOF_TIMESTAMPING_RX_HARDWARE. It is still a good idea to set that flag
in case future drivers pay attention.
If timestamps are reported, they will appear in a control message with
cmsg_level==SOL_SOCKET, cmsg_type==SO_TIMESTAMPING, and a payload like
this:
struct scm_timestamping {
struct timespec systime;
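(The payload goes on to include hwtimetrans and hwtimeraw fields for the two hardware variants mentioned above.) As a hedged illustration, not part of this patch: a minimal userspace sketch that requests software receive timestamps on an already-bound datagram socket and prints the systime member from the control message. The helper name and buffer sizes are made up, and the SO_TIMESTAMPING fallback value assumes the common architectures.

/* rx_tstamp sketch - error handling abbreviated */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/net_tstamp.h>	/* SOF_TIMESTAMPING_* flags */
#include <linux/errqueue.h>	/* struct scm_timestamping */

#ifndef SO_TIMESTAMPING
# define SO_TIMESTAMPING 37	/* value on most architectures; older libcs lack it */
#endif

static void read_one_rx_timestamp(int fd)
{
	char data[2048], ctrl[512];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;
	/* generate software rx stamps and report them as systime */
	int val = SOF_TIMESTAMPING_RX_SOFTWARE | SOF_TIMESTAMPING_SOFTWARE;

	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));

	if (recvmsg(fd, &msg, 0) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct scm_timestamping ts;

		if (cm->cmsg_level != SOL_SOCKET ||
		    cm->cmsg_type != SO_TIMESTAMPING)
			continue;
		memcpy(&ts, CMSG_DATA(cm), sizeof(ts));
		printf("systime %lld.%09ld\n",
		       (long long)ts.systime.tv_sec, (long)ts.systime.tv_nsec);
	}
}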

View file

@ -474,7 +474,7 @@ F: net/rxrpc/af_rxrpc.c
AGPGART DRIVER
M: David Airlie <airlied@linux.ie>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git
T: git git://people.freedesktop.org/~airlied/linux (part of drm maint)
S: Maintained
F: drivers/char/agp/
F: include/linux/agp*
@ -1738,6 +1738,7 @@ F: include/uapi/linux/bfs_fs.h
BLACKFIN ARCHITECTURE
M: Steven Miao <realmz6@gmail.com>
L: adi-buildroot-devel@lists.sourceforge.net
T: git git://git.code.sf.net/p/adi-linux/code
W: http://blackfin.uclinux.org
S: Supported
F: arch/blackfin/
@ -6008,6 +6009,8 @@ F: include/linux/netdevice.h
F: include/uapi/linux/in.h
F: include/uapi/linux/net.h
F: include/uapi/linux/netdevice.h
F: tools/net/
F: tools/testing/selftests/net/
NETWORKING [IPv4/IPv6]
M: "David S. Miller" <davem@davemloft.net>
@ -6182,6 +6185,12 @@ S: Supported
F: drivers/block/nvme*
F: include/linux/nvme.h
NXP TDA998X DRM DRIVER
M: Russell King <rmk+kernel@arm.linux.org.uk>
S: Supported
F: drivers/gpu/drm/i2c/tda998x_drv.c
F: include/drm/i2c/tda998x.h
OMAP SUPPORT
M: Tony Lindgren <tony@atomide.com>
L: linux-omap@vger.kernel.org

View file

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 14
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc6
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*

View file

@ -282,7 +282,7 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr,
#else
/* if V-P const for loop, PTAG can be written once outside loop */
if (full_page_op)
write_aux_reg(ARC_REG_DC_PTAG, paddr);
write_aux_reg(aux_tag, paddr);
#endif
while (num_lines-- > 0) {
@ -296,7 +296,7 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr,
write_aux_reg(aux_cmd, vaddr);
vaddr += L1_CACHE_BYTES;
#else
write_aux_reg(aux, paddr);
write_aux_reg(aux_cmd, paddr);
paddr += L1_CACHE_BYTES;
#endif
}

View file

@ -1578,6 +1578,7 @@ config BL_SWITCHER_DUMMY_IF
choice
prompt "Memory split"
depends on MMU
default VMSPLIT_3G
help
Select the desired split between kernel and user memory.
@ -1595,6 +1596,7 @@ endchoice
config PAGE_OFFSET
hex
default PHYS_OFFSET if !MMU
default 0x40000000 if VMSPLIT_1G
default 0x80000000 if VMSPLIT_2G
default 0xC0000000
@ -1903,6 +1905,7 @@ config XEN
depends on ARM && AEABI && OF
depends on CPU_V7 && !CPU_V6
depends on !GENERIC_ATOMIC64
depends on MMU
select ARM_PSCI
select SWIOTLB_XEN
select ARCH_DMA_ADDR_T_64BIT

View file

@ -1,4 +1,5 @@
ashldi3.S
bswapsdi2.S
font.c
lib1funcs.S
hyp-stub.S

View file

@ -147,7 +147,7 @@
};
pinctrl@35004800 {
compatible = "brcm,capri-pinctrl";
compatible = "brcm,bcm11351-pinctrl";
reg = <0x35004800 0x430>;
};

View file

@ -13,7 +13,7 @@
/ {
model = "OMAP3 GTA04";
compatible = "ti,omap3-gta04", "ti,omap3";
compatible = "ti,omap3-gta04", "ti,omap36xx", "ti,omap3";
cpus {
cpu@0 {

View file

@ -14,7 +14,7 @@
/ {
model = "IGEPv2 (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0020", "ti,omap3";
compatible = "isee,omap3-igep0020", "ti,omap36xx", "ti,omap3";
leds {
pinctrl-names = "default";

View file

@ -13,7 +13,7 @@
/ {
model = "IGEP COM MODULE (TI OMAP AM/DM37x)";
compatible = "isee,omap3-igep0030", "ti,omap3";
compatible = "isee,omap3-igep0030", "ti,omap36xx", "ti,omap3";
leds {
pinctrl-names = "default";

View file

@ -426,7 +426,7 @@
};
rtp: rtp@01c25000 {
compatible = "allwinner,sun4i-ts";
compatible = "allwinner,sun4i-a10-ts";
reg = <0x01c25000 0x100>;
interrupts = <29>;
};

View file

@ -383,7 +383,7 @@
};
rtp: rtp@01c25000 {
compatible = "allwinner,sun4i-ts";
compatible = "allwinner,sun4i-a10-ts";
reg = <0x01c25000 0x100>;
interrupts = <29>;
};

View file

@ -346,7 +346,7 @@
};
rtp: rtp@01c25000 {
compatible = "allwinner,sun4i-ts";
compatible = "allwinner,sun4i-a10-ts";
reg = <0x01c25000 0x100>;
interrupts = <29>;
};

View file

@ -454,7 +454,7 @@
rtc: rtc@01c20d00 {
compatible = "allwinner,sun7i-a20-rtc";
reg = <0x01c20d00 0x20>;
interrupts = <0 24 1>;
interrupts = <0 24 4>;
};
sid: eeprom@01c23800 {
@ -463,7 +463,7 @@
};
rtp: rtp@01c25000 {
compatible = "allwinner,sun4i-ts";
compatible = "allwinner,sun4i-a10-ts";
reg = <0x01c25000 0x100>;
interrupts = <0 29 4>;
};
@ -596,10 +596,10 @@
hstimer@01c60000 {
compatible = "allwinner,sun7i-a20-hstimer";
reg = <0x01c60000 0x1000>;
interrupts = <0 81 1>,
<0 82 1>,
<0 83 1>,
<0 84 1>;
interrupts = <0 81 4>,
<0 82 4>,
<0 83 4>,
<0 84 4>;
clocks = <&ahb_gates 28>;
};

View file

@ -204,7 +204,10 @@ CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_GPIO=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_LEDS_TRIGGER_ONESHOT=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=y

View file

@ -30,14 +30,15 @@
*/
#define UL(x) _AC(x, UL)
/* PAGE_OFFSET - the virtual address of the start of the kernel image */
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
#ifdef CONFIG_MMU
/*
* PAGE_OFFSET - the virtual address of the start of the kernel image
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
*/
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
@ -104,10 +105,6 @@
#define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
#endif
#ifndef PAGE_OFFSET
#define PAGE_OFFSET PLAT_PHYS_OFFSET
#endif
/*
* The module can be at any place in ram in nommu mode.
*/

View file

@ -177,6 +177,18 @@ __lookup_processor_type_data:
.long __proc_info_end
.size __lookup_processor_type_data, . - __lookup_processor_type_data
__error_lpae:
#ifdef CONFIG_DEBUG_LL
adr r0, str_lpae
bl printascii
b __error
str_lpae: .asciz "\nError: Kernel with LPAE support, but CPU does not support LPAE.\n"
#else
b __error
#endif
.align
ENDPROC(__error_lpae)
__error_p:
#ifdef CONFIG_DEBUG_LL
adr r0, str_p1

View file

@ -102,7 +102,7 @@ ENTRY(stext)
and r3, r3, #0xf @ extract VMSA support
cmp r3, #5 @ long-descriptor translation table format?
THUMB( it lo ) @ force fixup-able long branch encoding
blo __error_p @ only classic page table format
blo __error_lpae @ only classic page table format
#endif
#ifndef CONFIG_XIP_KERNEL

View file

@ -433,7 +433,9 @@ static const struct clk_ops dpll4_m5x2_ck_ops = {
.enable = &omap2_dflt_clk_enable,
.disable = &omap2_dflt_clk_disable,
.is_enabled = &omap2_dflt_clk_is_enabled,
.set_rate = &omap3_clkoutx2_set_rate,
.recalc_rate = &omap3_clkoutx2_recalc,
.round_rate = &omap3_clkoutx2_round_rate,
};
static const struct clk_ops dpll4_m5x2_ck_3630_ops = {

View file

@ -23,6 +23,8 @@
#include "prm.h"
#include "clockdomain.h"
#define MAX_CPUS 2
/* Machine specific information */
struct idle_statedata {
u32 cpu_state;
@ -48,11 +50,11 @@ static struct idle_statedata omap4_idle_data[] = {
},
};
static struct powerdomain *mpu_pd, *cpu_pd[NR_CPUS];
static struct clockdomain *cpu_clkdm[NR_CPUS];
static struct powerdomain *mpu_pd, *cpu_pd[MAX_CPUS];
static struct clockdomain *cpu_clkdm[MAX_CPUS];
static atomic_t abort_barrier;
static bool cpu_done[NR_CPUS];
static bool cpu_done[MAX_CPUS];
static struct idle_statedata *state_ptr = &omap4_idle_data[0];
/* Private functions */

View file

@ -623,25 +623,12 @@ void omap3_dpll_deny_idle(struct clk_hw_omap *clk)
/* Clock control for DPLL outputs */
/**
* omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate
* @clk: DPLL output struct clk
*
* Using parent clock DPLL data, look up DPLL state. If locked, set our
* rate to the dpll_clk * 2; otherwise, just use dpll_clk.
*/
unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
unsigned long parent_rate)
/* Find the parent DPLL for the given clkoutx2 clock */
static struct clk_hw_omap *omap3_find_clkoutx2_dpll(struct clk_hw *hw)
{
const struct dpll_data *dd;
unsigned long rate;
u32 v;
struct clk_hw_omap *pclk = NULL;
struct clk *parent;
if (!parent_rate)
return 0;
/* Walk up the parents of clk, looking for a DPLL */
do {
do {
@ -656,9 +643,35 @@ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
/* clk does not have a DPLL as a parent? error in the clock data */
if (!pclk) {
WARN_ON(1);
return 0;
return NULL;
}
return pclk;
}
/**
* omap3_clkoutx2_recalc - recalculate DPLL X2 output virtual clock rate
* @clk: DPLL output struct clk
*
* Using parent clock DPLL data, look up DPLL state. If locked, set our
* rate to the dpll_clk * 2; otherwise, just use dpll_clk.
*/
unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
unsigned long parent_rate)
{
const struct dpll_data *dd;
unsigned long rate;
u32 v;
struct clk_hw_omap *pclk = NULL;
if (!parent_rate)
return 0;
pclk = omap3_find_clkoutx2_dpll(hw);
if (!pclk)
return 0;
dd = pclk->dpll_data;
WARN_ON(!dd->enable_mask);
@ -672,6 +685,55 @@ unsigned long omap3_clkoutx2_recalc(struct clk_hw *hw,
return rate;
}
int omap3_clkoutx2_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
return 0;
}
long omap3_clkoutx2_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *prate)
{
const struct dpll_data *dd;
u32 v;
struct clk_hw_omap *pclk = NULL;
if (!*prate)
return 0;
pclk = omap3_find_clkoutx2_dpll(hw);
if (!pclk)
return 0;
dd = pclk->dpll_data;
/* TYPE J does not have a clkoutx2 */
if (dd->flags & DPLL_J_TYPE) {
*prate = __clk_round_rate(__clk_get_parent(pclk->hw.clk), rate);
return *prate;
}
WARN_ON(!dd->enable_mask);
v = omap2_clk_readl(pclk, dd->control_reg) & dd->enable_mask;
v >>= __ffs(dd->enable_mask);
/* If in bypass, the rate is fixed to the bypass rate*/
if (v != OMAP3XXX_EN_DPLL_LOCKED)
return *prate;
if (__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT) {
unsigned long best_parent;
best_parent = (rate / 2);
*prate = __clk_round_rate(__clk_get_parent(hw->clk),
best_parent);
}
return *prate * 2;
}
/* OMAP3/4 non-CORE DPLL clkops */
const struct clk_hw_omap_ops clkhwops_omap3_dpll = {
.allow_idle = omap3_dpll_allow_idle,

View file

@ -1947,29 +1947,31 @@ static int _ocp_softreset(struct omap_hwmod *oh)
goto dis_opt_clks;
_write_sysconfig(v, oh);
if (oh->class->sysc->srst_udelay)
udelay(oh->class->sysc->srst_udelay);
c = _wait_softreset_complete(oh);
if (c == MAX_MODULE_SOFTRESET_WAIT) {
pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n",
oh->name, MAX_MODULE_SOFTRESET_WAIT);
ret = -ETIMEDOUT;
goto dis_opt_clks;
} else {
pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c);
}
ret = _clear_softreset(oh, &v);
if (ret)
goto dis_opt_clks;
_write_sysconfig(v, oh);
if (oh->class->sysc->srst_udelay)
udelay(oh->class->sysc->srst_udelay);
c = _wait_softreset_complete(oh);
if (c == MAX_MODULE_SOFTRESET_WAIT)
pr_warning("omap_hwmod: %s: softreset failed (waited %d usec)\n",
oh->name, MAX_MODULE_SOFTRESET_WAIT);
else
pr_debug("omap_hwmod: %s: softreset in %d usec\n", oh->name, c);
/*
* XXX add _HWMOD_STATE_WEDGED for modules that don't come back from
* _wait_target_ready() or _reset()
*/
ret = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0;
dis_opt_clks:
if (oh->flags & HWMOD_CONTROL_OPT_CLKS_IN_RESET)
_disable_optional_clocks(oh);

View file

@ -1365,11 +1365,10 @@ static struct omap_hwmod_class_sysconfig dra7xx_spinlock_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_CLOCKACTIVITY |
SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE |
SYSC_HAS_SOFTRESET | SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP),
.sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_ENAWAKEUP |
SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
SYSS_HAS_RESET_STATUS),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
.sysc_fields = &omap_hwmod_sysc_type1,
};

View file

@ -22,6 +22,8 @@
#include "common-board-devices.h"
#include "dss-common.h"
#include "control.h"
#include "omap-secure.h"
#include "soc.h"
struct pdata_init {
const char *compatible;
@ -169,6 +171,22 @@ static void __init am3517_evm_legacy_init(void)
omap_ctrl_writel(v, AM35XX_CONTROL_IP_SW_RESET);
omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); /* OCP barrier */
}
static void __init nokia_n900_legacy_init(void)
{
hsmmc2_internal_input_clk();
if (omap_type() == OMAP2_DEVICE_TYPE_SEC) {
if (IS_ENABLED(CONFIG_ARM_ERRATA_430973)) {
pr_info("RX-51: Enabling ARM errata 430973 workaround\n");
/* set IBE to 1 */
rx51_secure_update_aux_cr(BIT(6), 0);
} else {
pr_warning("RX-51: Not enabling ARM errata 430973 workaround\n");
pr_warning("Thumb binaries may crash randomly without this workaround\n");
}
}
}
#endif /* CONFIG_ARCH_OMAP3 */
#ifdef CONFIG_ARCH_OMAP4
@ -239,6 +257,7 @@ struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
#endif
#ifdef CONFIG_ARCH_OMAP3
OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002030, "48002030.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,omap3-padconf", 0x480025a0, "480025a0.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002a00, "48002a00.pinmux", &pcs_pdata),
/* Only on am3517 */
OF_DEV_AUXDATA("ti,davinci_mdio", 0x5c030000, "davinci_mdio.0", NULL),
@ -259,7 +278,7 @@ struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
static struct pdata_init pdata_quirks[] __initdata = {
#ifdef CONFIG_ARCH_OMAP3
{ "compulab,omap3-sbc-t3730", omap3_sbc_t3730_legacy_init, },
{ "nokia,omap3-n900", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n900", nokia_n900_legacy_init, },
{ "nokia,omap3-n9", hsmmc2_internal_input_clk, },
{ "nokia,omap3-n950", hsmmc2_internal_input_clk, },
{ "isee,omap3-igep0020", omap3_igep0020_legacy_init, },

View file

@ -183,11 +183,11 @@ void omap4_prminst_global_warm_sw_reset(void)
OMAP4_PRM_RSTCTRL_OFFSET);
v |= OMAP4430_RST_GLOBAL_WARM_SW_MASK;
omap4_prminst_write_inst_reg(v, OMAP4430_PRM_PARTITION,
OMAP4430_PRM_DEVICE_INST,
dev_inst,
OMAP4_PRM_RSTCTRL_OFFSET);
/* OCP barrier */
v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION,
OMAP4430_PRM_DEVICE_INST,
dev_inst,
OMAP4_PRM_RSTCTRL_OFFSET);
}

View file

@ -13,6 +13,8 @@
#ifndef __ASM_ARCH_COLLIE_H
#define __ASM_ARCH_COLLIE_H
#include "hardware.h" /* Gives GPIO_MAX */
extern void locomolcd_power(int on);
#define COLLIE_SCOOP_GPIO_BASE (GPIO_MAX + 1)

View file

@ -264,6 +264,9 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
note_page(st, addr, 3, pmd_val(*pmd));
else
walk_pte(st, pmd, addr);
if (SECTION_SIZE < PMD_SIZE && pmd_large(pmd[1]))
note_page(st, addr + SECTION_SIZE, 3, pmd_val(pmd[1]));
}
}

View file

@ -12,6 +12,7 @@
#define _ASM_C6X_CACHE_H
#include <linux/irqflags.h>
#include <linux/init.h>
/*
* Cache line size

View file

@ -144,7 +144,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
* definition, which doesn't have the same semantics. We don't want to
* use -fno-builtin, so just hide the name ffs.
*/
#define ffs kernel_ffs
#define ffs(x) kernel_ffs(x)
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>

View file

@ -98,7 +98,7 @@ static int uncached_add_chunk(struct uncached_pool *uc_pool, int nid)
/* attempt to allocate a granule's worth of cached memory pages */
page = alloc_pages_exact_node(nid,
GFP_KERNEL | __GFP_ZERO | GFP_THISNODE,
GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE,
IA64_GRANULE_SHIFT-PAGE_SHIFT);
if (!page) {
mutex_unlock(&uc_pool->add_chunk_mutex);

View file

@ -1048,6 +1048,15 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
flush_altivec_to_thread(src);
flush_vsx_to_thread(src);
flush_spe_to_thread(src);
/*
* Flush TM state out so we can copy it. __switch_to_tm() does this
* flush but it removes the checkpointed state from the current CPU and
* transitions the CPU out of TM mode. Hence we need to call
* tm_recheckpoint_new_task() (on the same task) to restore the
* checkpointed state back and the TM mode.
*/
__switch_to_tm(src);
tm_recheckpoint_new_task(src);
*dst = *src;

View file

@ -81,6 +81,7 @@ _GLOBAL(relocate)
6: blr
.balign 8
p_dyn: .llong __dynamic_start - 0b
p_rela: .llong __rela_dyn_start - 0b
p_st: .llong _stext - 0b

View file

@ -123,7 +123,8 @@ static int __init cbe_ptcal_enable_on_node(int nid, int order)
area->nid = nid;
area->order = order;
area->pages = alloc_pages_exact_node(area->nid, GFP_KERNEL|GFP_THISNODE,
area->pages = alloc_pages_exact_node(area->nid,
GFP_KERNEL|__GFP_THISNODE,
area->order);
if (!area->pages) {

View file

@ -341,10 +341,6 @@ config X86_USE_3DNOW
def_bool y
depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
config X86_OOSTORE
def_bool y
depends on (MWINCHIP3D || MWINCHIPC6) && MTRR
#
# P6_NOPs are a relatively minor optimization that require a family >=
# 6 processor, except that it is broken on certain VIA chips.

View file

@ -85,11 +85,7 @@
#else
# define smp_rmb() barrier()
#endif
#ifdef CONFIG_X86_OOSTORE
# define smp_wmb() wmb()
#else
# define smp_wmb() barrier()
#endif
#define smp_wmb() barrier()
#define smp_read_barrier_depends() read_barrier_depends()
#define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
#else /* !SMP */
@ -100,7 +96,7 @@
#define set_mb(var, value) do { var = value; barrier(); } while (0)
#endif /* SMP */
#if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE)
#if defined(CONFIG_X86_PPRO_FENCE)
/*
* For either of these options x86 doesn't have a strong TSO memory

View file

@ -134,6 +134,7 @@ extern void efi_setup_page_tables(void);
extern void __init old_map_region(efi_memory_desc_t *md);
extern void __init runtime_code_page_mkexec(void);
extern void __init efi_runtime_mkexec(void);
extern void __init efi_apply_memmap_quirks(void);
struct efi_setup_data {
u64 fw_vendor;

View file

@ -237,7 +237,7 @@ memcpy_toio(volatile void __iomem *dst, const void *src, size_t count)
static inline void flush_write_buffers(void)
{
#if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE)
#if defined(CONFIG_X86_PPRO_FENCE)
asm volatile("lock; addl $0,0(%%esp)": : :"memory");
#endif
}

View file

@ -26,10 +26,9 @@
# define LOCK_PTR_REG "D"
#endif
#if defined(CONFIG_X86_32) && \
(defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE))
#if defined(CONFIG_X86_32) && (defined(CONFIG_X86_PPRO_FENCE))
/*
* On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
* On PPro SMP, we use a locked operation to unlock
* (PPro errata 66, 92)
*/
# define UNLOCK_LOCK_PREFIX LOCK_PREFIX

View file

@ -8,236 +8,6 @@
#include "cpu.h"
#ifdef CONFIG_X86_OOSTORE
static u32 power2(u32 x)
{
u32 s = 1;
while (s <= x)
s <<= 1;
return s >>= 1;
}
/*
* Set up an actual MCR
*/
static void centaur_mcr_insert(int reg, u32 base, u32 size, int key)
{
u32 lo, hi;
hi = base & ~0xFFF;
lo = ~(size-1); /* Size is a power of 2 so this makes a mask */
lo &= ~0xFFF; /* Remove the ctrl value bits */
lo |= key; /* Attribute we wish to set */
wrmsr(reg+MSR_IDT_MCR0, lo, hi);
mtrr_centaur_report_mcr(reg, lo, hi); /* Tell the mtrr driver */
}
/*
* Figure what we can cover with MCR's
*
* Shortcut: We know you can't put 4Gig of RAM on a winchip
*/
static u32 ramtop(void)
{
u32 clip = 0xFFFFFFFFUL;
u32 top = 0;
int i;
for (i = 0; i < e820.nr_map; i++) {
unsigned long start, end;
if (e820.map[i].addr > 0xFFFFFFFFUL)
continue;
/*
* Don't MCR over reserved space. Ignore the ISA hole
* we frob around that catastrophe already
*/
if (e820.map[i].type == E820_RESERVED) {
if (e820.map[i].addr >= 0x100000UL &&
e820.map[i].addr < clip)
clip = e820.map[i].addr;
continue;
}
start = e820.map[i].addr;
end = e820.map[i].addr + e820.map[i].size;
if (start >= end)
continue;
if (end > top)
top = end;
}
/*
* Everything below 'top' should be RAM except for the ISA hole.
* Because of the limited MCR's we want to map NV/ACPI into our
* MCR range for gunk in RAM
*
* Clip might cause us to MCR insufficient RAM but that is an
* acceptable failure mode and should only bite obscure boxes with
* a VESA hole at 15Mb
*
* The second case Clip sometimes kicks in is when the EBDA is marked
* as reserved. Again we fail safe with reasonable results
*/
if (top > clip)
top = clip;
return top;
}
/*
* Compute a set of MCR's to give maximum coverage
*/
static int centaur_mcr_compute(int nr, int key)
{
u32 mem = ramtop();
u32 root = power2(mem);
u32 base = root;
u32 top = root;
u32 floor = 0;
int ct = 0;
while (ct < nr) {
u32 fspace = 0;
u32 high;
u32 low;
/*
* Find the largest block we will fill going upwards
*/
high = power2(mem-top);
/*
* Find the largest block we will fill going downwards
*/
low = base/2;
/*
* Don't fill below 1Mb going downwards as there
* is an ISA hole in the way.
*/
if (base <= 1024*1024)
low = 0;
/*
* See how much space we could cover by filling below
* the ISA hole
*/
if (floor == 0)
fspace = 512*1024;
else if (floor == 512*1024)
fspace = 128*1024;
/* And forget ROM space */
/*
* Now install the largest coverage we get
*/
if (fspace > high && fspace > low) {
centaur_mcr_insert(ct, floor, fspace, key);
floor += fspace;
} else if (high > low) {
centaur_mcr_insert(ct, top, high, key);
top += high;
} else if (low > 0) {
base -= low;
centaur_mcr_insert(ct, base, low, key);
} else
break;
ct++;
}
/*
* We loaded ct values. We now need to set the mask. The caller
* must do this bit.
*/
return ct;
}
static void centaur_create_optimal_mcr(void)
{
int used;
int i;
/*
* Allocate up to 6 mcrs to mark as much of ram as possible
* as write combining and weak write ordered.
*
* To experiment with: Linux never uses stack operations for
* mmio spaces so we could globally enable stack operation wc
*
* Load the registers with type 31 - full write combining, all
* writes weakly ordered.
*/
used = centaur_mcr_compute(6, 31);
/*
* Wipe unused MCRs
*/
for (i = used; i < 8; i++)
wrmsr(MSR_IDT_MCR0+i, 0, 0);
}
static void winchip2_create_optimal_mcr(void)
{
u32 lo, hi;
int used;
int i;
/*
* Allocate up to 6 mcrs to mark as much of ram as possible
* as write combining, weak store ordered.
*
* Load the registers with type 25
* 8 - weak write ordering
* 16 - weak read ordering
* 1 - write combining
*/
used = centaur_mcr_compute(6, 25);
/*
* Mark the registers we are using.
*/
rdmsr(MSR_IDT_MCR_CTRL, lo, hi);
for (i = 0; i < used; i++)
lo |= 1<<(9+i);
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
/*
* Wipe unused MCRs
*/
for (i = used; i < 8; i++)
wrmsr(MSR_IDT_MCR0+i, 0, 0);
}
/*
* Handle the MCR key on the Winchip 2.
*/
static void winchip2_unprotect_mcr(void)
{
u32 lo, hi;
u32 key;
rdmsr(MSR_IDT_MCR_CTRL, lo, hi);
lo &= ~0x1C0; /* blank bits 8-6 */
key = (lo>>17) & 7;
lo |= key<<6; /* replace with unlock key */
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
}
static void winchip2_protect_mcr(void)
{
u32 lo, hi;
rdmsr(MSR_IDT_MCR_CTRL, lo, hi);
lo &= ~0x1C0; /* blank bits 8-6 */
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
}
#endif /* CONFIG_X86_OOSTORE */
#define ACE_PRESENT (1 << 6)
#define ACE_ENABLED (1 << 7)
#define ACE_FCR (1 << 28) /* MSR_VIA_FCR */
@ -362,20 +132,6 @@ static void init_centaur(struct cpuinfo_x86 *c)
fcr_clr = DPDC;
printk(KERN_NOTICE "Disabling bugged TSC.\n");
clear_cpu_cap(c, X86_FEATURE_TSC);
#ifdef CONFIG_X86_OOSTORE
centaur_create_optimal_mcr();
/*
* Enable:
* write combining on non-stack, non-string
* write combining on string, all types
* weak write ordering
*
* The C6 original lacks weak read order
*
* Note 0x120 is write only on Winchip 1
*/
wrmsr(MSR_IDT_MCR_CTRL, 0x01F0001F, 0);
#endif
break;
case 8:
switch (c->x86_mask) {
@ -392,40 +148,12 @@ static void init_centaur(struct cpuinfo_x86 *c)
fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|
E2MMX|EAMD3D;
fcr_clr = DPDC;
#ifdef CONFIG_X86_OOSTORE
winchip2_unprotect_mcr();
winchip2_create_optimal_mcr();
rdmsr(MSR_IDT_MCR_CTRL, lo, hi);
/*
* Enable:
* write combining on non-stack, non-string
* write combining on string, all types
* weak write ordering
*/
lo |= 31;
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
winchip2_protect_mcr();
#endif
break;
case 9:
name = "3";
fcr_set = ECX8|DSMC|DTLOCK|EMMX|EBRPRED|ERETSTK|
E2MMX|EAMD3D;
fcr_clr = DPDC;
#ifdef CONFIG_X86_OOSTORE
winchip2_unprotect_mcr();
winchip2_create_optimal_mcr();
rdmsr(MSR_IDT_MCR_CTRL, lo, hi);
/*
* Enable:
* write combining on non-stack, non-string
* write combining on string, all types
* weak write ordering
*/
lo |= 31;
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
winchip2_protect_mcr();
#endif
break;
default:
name = "??";

View file

@ -544,6 +544,10 @@ ENDPROC(early_idt_handlers)
/* This is global to keep gas from relaxing the jumps */
ENTRY(early_idt_handler)
cld
cmpl $2,(%esp) # X86_TRAP_NMI
je is_nmi # Ignore NMI
cmpl $2,%ss:early_recursion_flag
je hlt_loop
incl %ss:early_recursion_flag
@ -594,8 +598,9 @@ ex_entry:
pop %edx
pop %ecx
pop %eax
addl $8,%esp /* drop vector number and error code */
decl %ss:early_recursion_flag
is_nmi:
addl $8,%esp /* drop vector number and error code */
iret
ENDPROC(early_idt_handler)

View file

@ -343,6 +343,9 @@ early_idt_handlers:
ENTRY(early_idt_handler)
cld
cmpl $2,(%rsp) # X86_TRAP_NMI
je is_nmi # Ignore NMI
cmpl $2,early_recursion_flag(%rip)
jz 1f
incl early_recursion_flag(%rip)
@ -405,8 +408,9 @@ ENTRY(early_idt_handler)
popq %rdx
popq %rcx
popq %rax
addq $16,%rsp # drop vector number and error code
decl early_recursion_flag(%rip)
is_nmi:
addq $16,%rsp # drop vector number and error code
INTERRUPT_RETURN
ENDPROC(early_idt_handler)

View file

@ -86,10 +86,19 @@ EXPORT_SYMBOL(__kernel_fpu_begin);
void __kernel_fpu_end(void)
{
if (use_eager_fpu())
math_state_restore();
else
if (use_eager_fpu()) {
/*
* For eager fpu, most of the time, tsk_used_math() is true.
* Restore the user math as we are done with the kernel usage.
* In a few instances, during thread exit, signal handling, etc.,
* tsk_used_math() is false. Those few places will take proper
* actions, so we don't need to restore the math here.
*/
if (likely(tsk_used_math(current)))
math_state_restore();
} else {
stts();
}
}
EXPORT_SYMBOL(__kernel_fpu_end);

View file

@ -529,7 +529,7 @@ static void quirk_amd_nb_node(struct pci_dev *dev)
return;
pci_read_config_dword(nb_ht, 0x60, &val);
node = val & 7;
node = pcibus_to_node(dev->bus) | (val & 7);
/*
* Some hardware may return an invalid node ID,
* so check it first:

View file

@ -1239,14 +1239,8 @@ void __init setup_arch(char **cmdline_p)
register_refined_jiffies(CLOCK_TICK_RATE);
#ifdef CONFIG_EFI
/* Once setup is done above, unmap the EFI memory map on
* mismatched firmware/kernel archtectures since there is no
* support for runtime services.
*/
if (efi_enabled(EFI_BOOT) && !efi_is_native()) {
pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
efi_unmap_memmap();
}
if (efi_enabled(EFI_BOOT))
efi_apply_memmap_quirks();
#endif
}

View file

@ -3002,10 +3002,8 @@ static int cr8_write_interception(struct vcpu_svm *svm)
u8 cr8_prev = kvm_get_cr8(&svm->vcpu);
/* instruction emulation calls kvm_set_cr8() */
r = cr_interception(svm);
if (irqchip_in_kernel(svm->vcpu.kvm)) {
clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
if (irqchip_in_kernel(svm->vcpu.kvm))
return r;
}
if (cr8_prev <= kvm_get_cr8(&svm->vcpu))
return r;
kvm_run->exit_reason = KVM_EXIT_SET_TPR;
@ -3567,6 +3565,8 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
return;
clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
if (irr == -1)
return;

View file

@ -1020,13 +1020,17 @@ static inline bool smap_violation(int error_code, struct pt_regs *regs)
* This routine handles page faults. It determines the address,
* and the problem, and then passes it off to one of the appropriate
* routines.
*
* This function must have noinline because both callers
* {,trace_}do_page_fault() have notrace on. Having this an actual function
* guarantees there's a function trace entry.
*/
static void __kprobes
__do_page_fault(struct pt_regs *regs, unsigned long error_code)
static void __kprobes noinline
__do_page_fault(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
struct vm_area_struct *vma;
struct task_struct *tsk;
unsigned long address;
struct mm_struct *mm;
int fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
@ -1034,9 +1038,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
tsk = current;
mm = tsk->mm;
/* Get the faulting address: */
address = read_cr2();
/*
* Detect and handle instructions that would cause a page fault for
* both a tracked kernel page and a userspace page.
@ -1248,32 +1249,50 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
up_read(&mm->mmap_sem);
}
dotraplinkage void __kprobes
dotraplinkage void __kprobes notrace
do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
unsigned long address = read_cr2(); /* Get the faulting address */
enum ctx_state prev_state;
/*
* We must have this function tagged with __kprobes, notrace and call
* read_cr2() before calling anything else. To avoid calling any kind
* of tracing machinery before we've observed the CR2 value.
*
* exception_{enter,exit}() contain all sorts of tracepoints.
*/
prev_state = exception_enter();
__do_page_fault(regs, error_code);
__do_page_fault(regs, error_code, address);
exception_exit(prev_state);
}
static void trace_page_fault_entries(struct pt_regs *regs,
#ifdef CONFIG_TRACING
static void trace_page_fault_entries(unsigned long address, struct pt_regs *regs,
unsigned long error_code)
{
if (user_mode(regs))
trace_page_fault_user(read_cr2(), regs, error_code);
trace_page_fault_user(address, regs, error_code);
else
trace_page_fault_kernel(read_cr2(), regs, error_code);
trace_page_fault_kernel(address, regs, error_code);
}
dotraplinkage void __kprobes
dotraplinkage void __kprobes notrace
trace_do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
/*
* The exception_enter and tracepoint processing could
* trigger another page faults (user space callchain
* reading) and destroy the original cr2 value, so read
* the faulting address now.
*/
unsigned long address = read_cr2();
enum ctx_state prev_state;
prev_state = exception_enter();
trace_page_fault_entries(regs, error_code);
__do_page_fault(regs, error_code);
trace_page_fault_entries(address, regs, error_code);
__do_page_fault(regs, error_code, address);
exception_exit(prev_state);
}
#endif /* CONFIG_TRACING */

View file

@ -140,7 +140,7 @@ bpf_slow_path_byte_msh:
push %r9; \
push SKBDATA; \
/* rsi already has offset */ \
mov $SIZE,%ecx; /* size */ \
mov $SIZE,%edx; /* size */ \
call bpf_internal_load_pointer_neg_helper; \
test %rax,%rax; \
pop SKBDATA; \

View file

@ -52,6 +52,7 @@
#include <asm/tlbflush.h>
#include <asm/x86_init.h>
#include <asm/rtc.h>
#include <asm/uv/uv.h>
#define EFI_DEBUG
@ -1210,3 +1211,22 @@ static int __init parse_efi_cmdline(char *str)
return 0;
}
early_param("efi", parse_efi_cmdline);
void __init efi_apply_memmap_quirks(void)
{
/*
* Once setup is done earlier, unmap the EFI memory map on mismatched
* firmware/kernel architectures since there is no support for runtime
* services.
*/
if (!efi_is_native()) {
pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
efi_unmap_memmap();
}
/*
* UV doesn't support the new EFI pagetable mapping yet.
*/
if (is_uv_system())
set_bit(EFI_OLD_MEMMAP, &x86_efi_facility);
}

View file

@ -40,11 +40,7 @@
#define smp_rmb() barrier()
#endif /* CONFIG_X86_PPRO_FENCE */
#ifdef CONFIG_X86_OOSTORE
#define smp_wmb() wmb()
#else /* CONFIG_X86_OOSTORE */
#define smp_wmb() barrier()
#endif /* CONFIG_X86_OOSTORE */
#define smp_read_barrier_depends() read_barrier_depends()
#define set_mb(var, value) do { (void)xchg(&var, value); } while (0)

View file

@ -65,7 +65,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
* be reused after dying flag is set
*/
if (q->mq_ops) {
blk_mq_insert_request(q, rq, at_head, true);
blk_mq_insert_request(rq, at_head, true, false);
return;
}

View file

@ -137,7 +137,7 @@ static void mq_flush_run(struct work_struct *work)
rq = container_of(work, struct request, mq_flush_work);
memset(&rq->csd, 0, sizeof(rq->csd));
blk_mq_run_request(rq, true, false);
blk_mq_insert_request(rq, false, true, false);
}
static bool blk_flush_queue_rq(struct request *rq)
@ -411,7 +411,7 @@ void blk_insert_flush(struct request *rq)
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops) {
blk_mq_run_request(rq, false, true);
blk_mq_insert_request(rq, false, false, true);
} else
list_add_tail(&rq->queuelist, &q->queue_head);
return;

View file

@ -11,7 +11,7 @@
#include "blk-mq.h"
static LIST_HEAD(blk_mq_cpu_notify_list);
static DEFINE_SPINLOCK(blk_mq_cpu_notify_lock);
static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock);
static int blk_mq_main_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
@ -19,12 +19,12 @@ static int blk_mq_main_cpu_notify(struct notifier_block *self,
unsigned int cpu = (unsigned long) hcpu;
struct blk_mq_cpu_notifier *notify;
spin_lock(&blk_mq_cpu_notify_lock);
raw_spin_lock(&blk_mq_cpu_notify_lock);
list_for_each_entry(notify, &blk_mq_cpu_notify_list, list)
notify->notify(notify->data, action, cpu);
spin_unlock(&blk_mq_cpu_notify_lock);
raw_spin_unlock(&blk_mq_cpu_notify_lock);
return NOTIFY_OK;
}
@ -32,16 +32,16 @@ void blk_mq_register_cpu_notifier(struct blk_mq_cpu_notifier *notifier)
{
BUG_ON(!notifier->notify);
spin_lock(&blk_mq_cpu_notify_lock);
raw_spin_lock(&blk_mq_cpu_notify_lock);
list_add_tail(&notifier->list, &blk_mq_cpu_notify_list);
spin_unlock(&blk_mq_cpu_notify_lock);
raw_spin_unlock(&blk_mq_cpu_notify_lock);
}
void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier)
{
spin_lock(&blk_mq_cpu_notify_lock);
raw_spin_lock(&blk_mq_cpu_notify_lock);
list_del(&notifier->list);
spin_unlock(&blk_mq_cpu_notify_lock);
raw_spin_unlock(&blk_mq_cpu_notify_lock);
}
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,

View file

@ -73,8 +73,8 @@ static void blk_mq_hctx_mark_pending(struct blk_mq_hw_ctx *hctx,
set_bit(ctx->index_hw, hctx->ctx_map);
}
static struct request *blk_mq_alloc_rq(struct blk_mq_hw_ctx *hctx, gfp_t gfp,
bool reserved)
static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx,
gfp_t gfp, bool reserved)
{
struct request *rq;
unsigned int tag;
@ -193,12 +193,6 @@ static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
ctx->rq_dispatched[rw_is_sync(rw_flags)]++;
}
static struct request *__blk_mq_alloc_request(struct blk_mq_hw_ctx *hctx,
gfp_t gfp, bool reserved)
{
return blk_mq_alloc_rq(hctx, gfp, reserved);
}
static struct request *blk_mq_alloc_request_pinned(struct request_queue *q,
int rw, gfp_t gfp,
bool reserved)
@ -289,38 +283,10 @@ void blk_mq_free_request(struct request *rq)
__blk_mq_free_request(hctx, ctx, rq);
}
static void blk_mq_bio_endio(struct request *rq, struct bio *bio, int error)
bool blk_mq_end_io_partial(struct request *rq, int error, unsigned int nr_bytes)
{
if (error)
clear_bit(BIO_UPTODATE, &bio->bi_flags);
else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
error = -EIO;
if (unlikely(rq->cmd_flags & REQ_QUIET))
set_bit(BIO_QUIET, &bio->bi_flags);
/* don't actually finish bio if it's part of flush sequence */
if (!(rq->cmd_flags & REQ_FLUSH_SEQ))
bio_endio(bio, error);
}
void blk_mq_end_io(struct request *rq, int error)
{
struct bio *bio = rq->bio;
unsigned int bytes = 0;
trace_block_rq_complete(rq->q, rq);
while (bio) {
struct bio *next = bio->bi_next;
bio->bi_next = NULL;
bytes += bio->bi_iter.bi_size;
blk_mq_bio_endio(rq, bio, error);
bio = next;
}
blk_account_io_completion(rq, bytes);
if (blk_update_request(rq, error, blk_rq_bytes(rq)))
return true;
blk_account_io_done(rq);
@ -328,8 +294,9 @@ void blk_mq_end_io(struct request *rq, int error)
rq->end_io(rq, error);
else
blk_mq_free_request(rq);
return false;
}
EXPORT_SYMBOL(blk_mq_end_io);
EXPORT_SYMBOL(blk_mq_end_io_partial);
static void __blk_mq_complete_request_remote(void *data)
{
@ -730,60 +697,27 @@ static void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx,
blk_mq_add_timer(rq);
}
void blk_mq_insert_request(struct request_queue *q, struct request *rq,
bool at_head, bool run_queue)
{
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx, *current_ctx;
ctx = rq->mq_ctx;
hctx = q->mq_ops->map_queue(q, ctx->cpu);
if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) {
blk_insert_flush(rq);
} else {
current_ctx = blk_mq_get_ctx(q);
if (!cpu_online(ctx->cpu)) {
ctx = current_ctx;
hctx = q->mq_ops->map_queue(q, ctx->cpu);
rq->mq_ctx = ctx;
}
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, at_head);
spin_unlock(&ctx->lock);
blk_mq_put_ctx(current_ctx);
}
if (run_queue)
__blk_mq_run_hw_queue(hctx);
}
EXPORT_SYMBOL(blk_mq_insert_request);
/*
* This is a special version of blk_mq_insert_request to bypass FLUSH request
* check. Should only be used internally.
*/
void blk_mq_run_request(struct request *rq, bool run_queue, bool async)
void blk_mq_insert_request(struct request *rq, bool at_head, bool run_queue,
bool async)
{
struct request_queue *q = rq->q;
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx, *current_ctx;
struct blk_mq_ctx *ctx = rq->mq_ctx, *current_ctx;
current_ctx = blk_mq_get_ctx(q);
if (!cpu_online(ctx->cpu))
rq->mq_ctx = ctx = current_ctx;
ctx = rq->mq_ctx;
if (!cpu_online(ctx->cpu)) {
ctx = current_ctx;
rq->mq_ctx = ctx;
}
hctx = q->mq_ops->map_queue(q, ctx->cpu);
/* ctx->cpu might be offline */
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, false);
spin_unlock(&ctx->lock);
if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA) &&
!(rq->cmd_flags & (REQ_FLUSH_SEQ))) {
blk_insert_flush(rq);
} else {
spin_lock(&ctx->lock);
__blk_mq_insert_request(hctx, rq, at_head);
spin_unlock(&ctx->lock);
}
blk_mq_put_ctx(current_ctx);
@ -926,6 +860,8 @@ static void blk_mq_make_request(struct request_queue *q, struct bio *bio)
ctx = blk_mq_get_ctx(q);
hctx = q->mq_ops->map_queue(q, ctx->cpu);
if (is_sync)
rw |= REQ_SYNC;
trace_block_getrq(q, bio, rw);
rq = __blk_mq_alloc_request(hctx, GFP_ATOMIC, false);
if (likely(rq))

View file

@ -23,7 +23,6 @@ struct blk_mq_ctx {
};
void __blk_mq_complete_request(struct request *rq);
void blk_mq_run_request(struct request *rq, bool run_queue, bool async);
void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async);
void blk_mq_init_flush(struct request_queue *q);
void blk_mq_drain_queue(struct request_queue *q);

View file

@ -67,6 +67,8 @@ enum ec_command {
#define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */
#define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */
#define ACPI_EC_MSI_UDELAY 550 /* Wait 550us for MSI EC */
#define ACPI_EC_CLEAR_MAX 100 /* Maximum number of events to query
* when trying to clear the EC */
enum {
EC_FLAGS_QUERY_PENDING, /* Query is pending */
@ -116,6 +118,7 @@ EXPORT_SYMBOL(first_ec);
static int EC_FLAGS_MSI; /* Out-of-spec MSI controller */
static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */
static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */
static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
/* --------------------------------------------------------------------------
Transaction Management
@ -440,6 +443,29 @@ acpi_handle ec_get_handle(void)
EXPORT_SYMBOL(ec_get_handle);
static int acpi_ec_query_unlocked(struct acpi_ec *ec, u8 *data);
/*
* Clears stale _Q events that might have accumulated in the EC.
* Run with locked ec mutex.
*/
static void acpi_ec_clear(struct acpi_ec *ec)
{
int i, status;
u8 value = 0;
for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {
status = acpi_ec_query_unlocked(ec, &value);
if (status || !value)
break;
}
if (unlikely(i == ACPI_EC_CLEAR_MAX))
pr_warn("Warning: Maximum of %d stale EC events cleared\n", i);
else
pr_info("%d stale EC events cleared\n", i);
}
void acpi_ec_block_transactions(void)
{
struct acpi_ec *ec = first_ec;
@ -463,6 +489,10 @@ void acpi_ec_unblock_transactions(void)
mutex_lock(&ec->mutex);
/* Allow transactions to be carried out again */
clear_bit(EC_FLAGS_BLOCKED, &ec->flags);
if (EC_FLAGS_CLEAR_ON_RESUME)
acpi_ec_clear(ec);
mutex_unlock(&ec->mutex);
}
@ -821,6 +851,13 @@ static int acpi_ec_add(struct acpi_device *device)
/* EC is fully operational, allow queries */
clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
/* Clear stale _Q events if hardware might require that */
if (EC_FLAGS_CLEAR_ON_RESUME) {
mutex_lock(&ec->mutex);
acpi_ec_clear(ec);
mutex_unlock(&ec->mutex);
}
return ret;
}
@ -922,6 +959,30 @@ static int ec_enlarge_storm_threshold(const struct dmi_system_id *id)
return 0;
}
/*
* On some hardware it is necessary to clear events accumulated by the EC during
* sleep. These ECs stop reporting GPEs until they are manually polled, if too
* many events are accumulated. (e.g. Samsung Series 5/9 notebooks)
*
* https://bugzilla.kernel.org/show_bug.cgi?id=44161
*
* Ideally, the EC should also be instructed NOT to accumulate events during
* sleep (which Windows seems to do somehow), but the interface to control this
* behaviour is not known at this time.
*
* Models known to be affected are Samsung 530Uxx/535Uxx/540Uxx/550Pxx/900Xxx,
* however it is very likely that other Samsung models are affected.
*
* On systems which don't accumulate _Q events during sleep, this extra check
* should be harmless.
*/
static int ec_clear_on_resume(const struct dmi_system_id *id)
{
pr_debug("Detected system needing EC poll on resume.\n");
EC_FLAGS_CLEAR_ON_RESUME = 1;
return 0;
}
static struct dmi_system_id ec_dmi_table[] __initdata = {
{
ec_skip_dsdt_scan, "Compal JFL92", {
@ -965,6 +1026,9 @@ static struct dmi_system_id ec_dmi_table[] __initdata = {
ec_validate_ecdt, "ASUS hardware", {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek Computer Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),}, NULL},
{
ec_clear_on_resume, "Samsung hardware", {
DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
{},
};

View file

@ -77,18 +77,24 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
switch (ares->type) {
case ACPI_RESOURCE_TYPE_MEMORY24:
memory24 = &ares->data.memory24;
if (!memory24->address_length)
return false;
acpi_dev_get_memresource(res, memory24->minimum,
memory24->address_length,
memory24->write_protect);
break;
case ACPI_RESOURCE_TYPE_MEMORY32:
memory32 = &ares->data.memory32;
if (!memory32->address_length)
return false;
acpi_dev_get_memresource(res, memory32->minimum,
memory32->address_length,
memory32->write_protect);
break;
case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:
fixed_memory32 = &ares->data.fixed_memory32;
if (!fixed_memory32->address_length)
return false;
acpi_dev_get_memresource(res, fixed_memory32->address,
fixed_memory32->address_length,
fixed_memory32->write_protect);
@ -144,12 +150,16 @@ bool acpi_dev_resource_io(struct acpi_resource *ares, struct resource *res)
switch (ares->type) {
case ACPI_RESOURCE_TYPE_IO:
io = &ares->data.io;
if (!io->address_length)
return false;
acpi_dev_get_ioresource(res, io->minimum,
io->address_length,
io->io_decode);
break;
case ACPI_RESOURCE_TYPE_FIXED_IO:
fixed_io = &ares->data.fixed_io;
if (!fixed_io->address_length)
return false;
acpi_dev_get_ioresource(res, fixed_io->address,
fixed_io->address_length,
ACPI_DECODE_10);

View file

@ -71,6 +71,17 @@ static int acpi_sleep_prepare(u32 acpi_state)
return 0;
}
static bool acpi_sleep_state_supported(u8 sleep_state)
{
acpi_status status;
u8 type_a, type_b;
status = acpi_get_sleep_type_data(sleep_state, &type_a, &type_b);
return ACPI_SUCCESS(status) && (!acpi_gbl_reduced_hardware
|| (acpi_gbl_FADT.sleep_control.address
&& acpi_gbl_FADT.sleep_status.address));
}
#ifdef CONFIG_ACPI_SLEEP
static u32 acpi_target_sleep_state = ACPI_STATE_S0;
@ -604,15 +615,9 @@ static void acpi_sleep_suspend_setup(void)
{
int i;
for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) {
acpi_status status;
u8 type_a, type_b;
status = acpi_get_sleep_type_data(i, &type_a, &type_b);
if (ACPI_SUCCESS(status)) {
for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++)
if (acpi_sleep_state_supported(i))
sleep_states[i] = 1;
}
}
suspend_set_ops(old_suspend_ordering ?
&acpi_suspend_ops_old : &acpi_suspend_ops);
@ -740,11 +745,7 @@ static const struct platform_hibernation_ops acpi_hibernation_ops_old = {
static void acpi_sleep_hibernate_setup(void)
{
acpi_status status;
u8 type_a, type_b;
status = acpi_get_sleep_type_data(ACPI_STATE_S4, &type_a, &type_b);
if (ACPI_FAILURE(status))
if (!acpi_sleep_state_supported(ACPI_STATE_S4))
return;
hibernation_set_ops(old_suspend_ordering ?
@ -793,8 +794,6 @@ static void acpi_power_off(void)
int __init acpi_sleep_init(void)
{
acpi_status status;
u8 type_a, type_b;
char supported[ACPI_S_STATE_COUNT * 3 + 1];
char *pos = supported;
int i;
@ -806,8 +805,7 @@ int __init acpi_sleep_init(void)
acpi_sleep_suspend_setup();
acpi_sleep_hibernate_setup();
status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b);
if (ACPI_SUCCESS(status)) {
if (acpi_sleep_state_supported(ACPI_STATE_S5)) {
sleep_states[ACPI_STATE_S5] = 1;
pm_power_off_prepare = acpi_power_off_prepare;
pm_power_off = acpi_power_off;

View file

@ -4175,6 +4175,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */
{ "ST1000LM024 HN-M101MBB", "2AR10001", ATA_HORKAGE_BROKEN_FPDMA_AA },
{ "ST1000LM024 HN-M101MBB", "2BA30001", ATA_HORKAGE_BROKEN_FPDMA_AA },
/* Blacklist entries taken from Silicon Image 3124/3132
Windows driver .inf file - also several Linux problem reports */
@ -4224,7 +4225,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* devices that don't properly handle queued TRIM commands */
{ "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Crucial_CT???M500SSD1", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
/*
* Some WD SATA-I drives spin up and down erratically when the link

View file

@ -53,7 +53,7 @@
#define MTIP_FTL_REBUILD_TIMEOUT_MS 2400000
/* unaligned IO handling */
#define MTIP_MAX_UNALIGNED_SLOTS 8
#define MTIP_MAX_UNALIGNED_SLOTS 2
/* Macro to extract the tag bit number from a tag value. */
#define MTIP_TAG_BIT(tag) (tag & 0x1F)

View file

@ -26,6 +26,8 @@ struct rcar_gen2_cpg {
void __iomem *reg;
};
#define CPG_FRQCRB 0x00000004
#define CPG_FRQCRB_KICK BIT(31)
#define CPG_SDCKCR 0x00000074
#define CPG_PLL0CR 0x000000d8
#define CPG_FRQCRC 0x000000e0
@ -45,6 +47,7 @@ struct rcar_gen2_cpg {
struct cpg_z_clk {
struct clk_hw hw;
void __iomem *reg;
void __iomem *kick_reg;
};
#define to_z_clk(_hw) container_of(_hw, struct cpg_z_clk, hw)
@ -83,17 +86,45 @@ static int cpg_z_clk_set_rate(struct clk_hw *hw, unsigned long rate,
{
struct cpg_z_clk *zclk = to_z_clk(hw);
unsigned int mult;
u32 val;
u32 val, kick;
unsigned int i;
mult = div_u64((u64)rate * 32, parent_rate);
mult = clamp(mult, 1U, 32U);
if (clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK)
return -EBUSY;
val = clk_readl(zclk->reg);
val &= ~CPG_FRQCRC_ZFC_MASK;
val |= (32 - mult) << CPG_FRQCRC_ZFC_SHIFT;
clk_writel(val, zclk->reg);
return 0;
/*
* Set KICK bit in FRQCRB to update hardware setting and wait for
* clock change completion.
*/
kick = clk_readl(zclk->kick_reg);
kick |= CPG_FRQCRB_KICK;
clk_writel(kick, zclk->kick_reg);
/*
* Note: There is no HW information about the worst case latency.
*
* Using experimental measurements, it seems that no more than
* ~10 iterations are needed, independently of the CPU rate.
* Since this value might be dependent on external xtal rate, pll1
* rate or even the other emulation clocks rate, use 1000 as a
* "super" safe value.
*/
for (i = 1000; i; i--) {
if (!(clk_readl(zclk->kick_reg) & CPG_FRQCRB_KICK))
return 0;
cpu_relax();
}
return -ETIMEDOUT;
}
static const struct clk_ops cpg_z_clk_ops = {
@ -120,6 +151,7 @@ static struct clk * __init cpg_z_clk_register(struct rcar_gen2_cpg *cpg)
init.num_parents = 1;
zclk->reg = cpg->reg + CPG_FRQCRC;
zclk->kick_reg = cpg->reg + CPG_FRQCRB;
zclk->hw.init = &init;
clk = clk_register(NULL, &zclk->hw);

View file

@ -1109,12 +1109,27 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
goto err_set_policy_cpu;
}
/* related cpus should at least have policy->cpus */
cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
/*
* affected cpus must always be the ones which are online. We aren't
* managing offline cpus here.
*/
cpumask_and(policy->cpus, policy->cpus, cpu_online_mask);
if (!frozen) {
policy->user_policy.min = policy->min;
policy->user_policy.max = policy->max;
}
down_write(&policy->rwsem);
write_lock_irqsave(&cpufreq_driver_lock, flags);
for_each_cpu(j, policy->cpus)
per_cpu(cpufreq_cpu_data, j) = policy;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
if (cpufreq_driver->get) {
if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
policy->cur = cpufreq_driver->get(policy->cpu);
if (!policy->cur) {
pr_err("%s: ->get() failed\n", __func__);
@ -1162,20 +1177,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
}
}
/* related cpus should at least have policy->cpus */
cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
/*
* affected cpus must always be the ones which are online. We aren't
* managing offline cpus here.
*/
cpumask_and(policy->cpus, policy->cpus, cpu_online_mask);
if (!frozen) {
policy->user_policy.min = policy->min;
policy->user_policy.max = policy->max;
}
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_START, policy);
@ -1206,6 +1207,7 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
policy->user_policy.policy = policy->policy;
policy->user_policy.governor = policy->governor;
}
up_write(&policy->rwsem);
kobject_uevent(&policy->kobj, KOBJ_ADD);
up_read(&cpufreq_rwsem);
@ -1546,23 +1548,16 @@ static unsigned int __cpufreq_get(unsigned int cpu)
*/
unsigned int cpufreq_get(unsigned int cpu)
{
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
unsigned int ret_freq = 0;
if (cpufreq_disabled() || !cpufreq_driver)
return -ENOENT;
if (policy) {
down_read(&policy->rwsem);
ret_freq = __cpufreq_get(cpu);
up_read(&policy->rwsem);
BUG_ON(!policy);
if (!down_read_trylock(&cpufreq_rwsem))
return 0;
down_read(&policy->rwsem);
ret_freq = __cpufreq_get(cpu);
up_read(&policy->rwsem);
up_read(&cpufreq_rwsem);
cpufreq_cpu_put(policy);
}
return ret_freq;
}
@ -2148,7 +2143,7 @@ int cpufreq_update_policy(unsigned int cpu)
* BIOS might change freq behind our back
* -> ask driver for current freq and notify governors about a change
*/
if (cpufreq_driver->get) {
if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
new_policy.cur = cpufreq_driver->get(cpu);
if (!policy->cur) {
pr_debug("Driver did not initialize current freq");

View file

@ -916,7 +916,7 @@ static int lookup_existing_device(struct device *dev, void *data)
old->config_rom_retries = 0;
fw_notice(card, "rediscovered device %s\n", dev_name(dev));
PREPARE_DELAYED_WORK(&old->work, fw_device_update);
old->workfn = fw_device_update;
fw_schedule_device_work(old, 0);
if (current_node == card->root_node)
@ -1075,7 +1075,7 @@ static void fw_device_init(struct work_struct *work)
if (atomic_cmpxchg(&device->state,
FW_DEVICE_INITIALIZING,
FW_DEVICE_RUNNING) == FW_DEVICE_GONE) {
PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown);
device->workfn = fw_device_shutdown;
fw_schedule_device_work(device, SHUTDOWN_DELAY);
} else {
fw_notice(card, "created device %s: GUID %08x%08x, S%d00\n",
@ -1196,13 +1196,20 @@ static void fw_device_refresh(struct work_struct *work)
dev_name(&device->device), fw_rcode_string(ret));
gone:
atomic_set(&device->state, FW_DEVICE_GONE);
PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown);
device->workfn = fw_device_shutdown;
fw_schedule_device_work(device, SHUTDOWN_DELAY);
out:
if (node_id == card->root_node->node_id)
fw_schedule_bm_work(card, 0);
}
static void fw_device_workfn(struct work_struct *work)
{
struct fw_device *device = container_of(to_delayed_work(work),
struct fw_device, work);
device->workfn(work);
}
void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
{
struct fw_device *device;
@ -1252,7 +1259,8 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
* power-up after getting plugged in. We schedule the
* first config rom scan half a second after bus reset.
*/
INIT_DELAYED_WORK(&device->work, fw_device_init);
device->workfn = fw_device_init;
INIT_DELAYED_WORK(&device->work, fw_device_workfn);
fw_schedule_device_work(device, INITIAL_DELAY);
break;
@ -1268,7 +1276,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
if (atomic_cmpxchg(&device->state,
FW_DEVICE_RUNNING,
FW_DEVICE_INITIALIZING) == FW_DEVICE_RUNNING) {
PREPARE_DELAYED_WORK(&device->work, fw_device_refresh);
device->workfn = fw_device_refresh;
fw_schedule_device_work(device,
device->is_local ? 0 : INITIAL_DELAY);
}
@ -1283,7 +1291,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
smp_wmb(); /* update node_id before generation */
device->generation = card->generation;
if (atomic_read(&device->state) == FW_DEVICE_RUNNING) {
PREPARE_DELAYED_WORK(&device->work, fw_device_update);
device->workfn = fw_device_update;
fw_schedule_device_work(device, 0);
}
break;
@ -1308,7 +1316,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
device = node->data;
if (atomic_xchg(&device->state,
FW_DEVICE_GONE) == FW_DEVICE_RUNNING) {
PREPARE_DELAYED_WORK(&device->work, fw_device_shutdown);
device->workfn = fw_device_shutdown;
fw_schedule_device_work(device,
list_empty(&card->link) ? 0 : SHUTDOWN_DELAY);
}
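The same replacement pattern appears in this file and in sbp2.c below: instead of PREPARE_DELAYED_WORK() re-pointing the work item, the delayed work is always initialized with one fixed dispatcher that calls whatever handler is currently stored in workfn. A condensed sketch with hypothetical names (struct my_unit, my_unit_*):

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Hypothetical device/unit using the workfn-indirection pattern. */
struct my_unit {
    work_func_t workfn;             /* handler to run on the next expiry */
    struct delayed_work work;       /* always bound to my_unit_workfn() */
};

static void my_unit_init_handler(struct work_struct *work)
{
    /* first-time initialization would go here */
}

/* The only function ever handed to INIT_DELAYED_WORK(); it forwards to
 * the currently selected handler. */
static void my_unit_workfn(struct work_struct *work)
{
    struct my_unit *u = container_of(to_delayed_work(work),
                                     struct my_unit, work);
    u->workfn(work);
}

static void my_unit_start(struct my_unit *u)
{
    u->workfn = my_unit_init_handler;           /* pick the first state */
    INIT_DELAYED_WORK(&u->work, my_unit_workfn);
    schedule_delayed_work(&u->work, HZ / 2);
}

/* Later, a state change just updates the pointer before (re)scheduling:
 *    u->workfn = my_unit_shutdown_handler;
 *    schedule_delayed_work(&u->work, 0);
 */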

View file

@ -929,8 +929,6 @@ static void fwnet_write_complete(struct fw_card *card, int rcode,
if (rcode == RCODE_COMPLETE) {
fwnet_transmit_packet_done(ptask);
} else {
fwnet_transmit_packet_failed(ptask);
if (printk_timed_ratelimit(&j, 1000) || rcode != last_rcode) {
dev_err(&ptask->dev->netdev->dev,
"fwnet_write_complete failed: %x (skipped %d)\n",
@ -938,8 +936,10 @@ static void fwnet_write_complete(struct fw_card *card, int rcode,
errors_skipped = 0;
last_rcode = rcode;
} else
} else {
errors_skipped++;
}
fwnet_transmit_packet_failed(ptask);
}
}

View file

@ -290,7 +290,6 @@ static char ohci_driver_name[] = KBUILD_MODNAME;
#define QUIRK_NO_MSI 0x10
#define QUIRK_TI_SLLZ059 0x20
#define QUIRK_IR_WAKE 0x40
#define QUIRK_PHY_LCTRL_TIMEOUT 0x80
/* In case of multiple matches in ohci_quirks[], only the first one is used. */
static const struct {
@ -303,10 +302,7 @@ static const struct {
QUIRK_BE_HEADERS},
{PCI_VENDOR_ID_ATT, PCI_DEVICE_ID_AGERE_FW643, 6,
QUIRK_PHY_LCTRL_TIMEOUT | QUIRK_NO_MSI},
{PCI_VENDOR_ID_ATT, PCI_ANY_ID, PCI_ANY_ID,
QUIRK_PHY_LCTRL_TIMEOUT},
QUIRK_NO_MSI},
{PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_SB1394, PCI_ANY_ID,
QUIRK_RESET_PACKET},
@ -353,7 +349,6 @@ MODULE_PARM_DESC(quirks, "Chip quirks (default = 0"
", disable MSI = " __stringify(QUIRK_NO_MSI)
", TI SLLZ059 erratum = " __stringify(QUIRK_TI_SLLZ059)
", IR wake unreliable = " __stringify(QUIRK_IR_WAKE)
", phy LCtrl timeout = " __stringify(QUIRK_PHY_LCTRL_TIMEOUT)
")");
#define OHCI_PARAM_DEBUG_AT_AR 1
@ -2299,9 +2294,6 @@ static int ohci_enable(struct fw_card *card,
* TI TSB82AA2 + TSB81BA3(A) cards signal LPS enabled early but
* cannot actually use the phy at that time. These need tens of
* milliseconds pause between LPS write and first phy access too.
*
* But do not wait for 50msec on Agere/LSI cards. Their phy
* arbitration state machine may time out during such a long wait.
*/
reg_write(ohci, OHCI1394_HCControlSet,
@ -2309,11 +2301,8 @@ static int ohci_enable(struct fw_card *card,
OHCI1394_HCControl_postedWriteEnable);
flush_writes(ohci);
if (!(ohci->quirks & QUIRK_PHY_LCTRL_TIMEOUT))
for (lps = 0, i = 0; !lps && i < 3; i++) {
msleep(50);
for (lps = 0, i = 0; !lps && i < 150; i++) {
msleep(1);
lps = reg_read(ohci, OHCI1394_HCControlSet) &
OHCI1394_HCControl_LPS;
}

View file

@ -146,6 +146,7 @@ struct sbp2_logical_unit {
*/
int generation;
int retries;
work_func_t workfn;
struct delayed_work work;
bool has_sdev;
bool blocked;
@ -864,7 +865,7 @@ static void sbp2_login(struct work_struct *work)
/* set appropriate retry limit(s) in BUSY_TIMEOUT register */
sbp2_set_busy_timeout(lu);
PREPARE_DELAYED_WORK(&lu->work, sbp2_reconnect);
lu->workfn = sbp2_reconnect;
sbp2_agent_reset(lu);
/* This was a re-login. */
@ -918,7 +919,7 @@ static void sbp2_login(struct work_struct *work)
* If a bus reset happened, sbp2_update will have requeued
* lu->work already. Reset the work from reconnect to login.
*/
PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
lu->workfn = sbp2_login;
}
static void sbp2_reconnect(struct work_struct *work)
@ -952,7 +953,7 @@ static void sbp2_reconnect(struct work_struct *work)
lu->retries++ >= 5) {
dev_err(tgt_dev(tgt), "failed to reconnect\n");
lu->retries = 0;
PREPARE_DELAYED_WORK(&lu->work, sbp2_login);
lu->workfn = sbp2_login;
}
sbp2_queue_work(lu, DIV_ROUND_UP(HZ, 5));
@ -972,6 +973,13 @@ static void sbp2_reconnect(struct work_struct *work)
sbp2_conditionally_unblock(lu);
}
static void sbp2_lu_workfn(struct work_struct *work)
{
struct sbp2_logical_unit *lu = container_of(to_delayed_work(work),
struct sbp2_logical_unit, work);
lu->workfn(work);
}
static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry)
{
struct sbp2_logical_unit *lu;
@ -998,7 +1006,8 @@ static int sbp2_add_logical_unit(struct sbp2_target *tgt, int lun_entry)
lu->blocked = false;
++tgt->dont_block;
INIT_LIST_HEAD(&lu->orb_list);
INIT_DELAYED_WORK(&lu->work, sbp2_login);
lu->workfn = sbp2_login;
INIT_DELAYED_WORK(&lu->work, sbp2_lu_workfn);
list_add_tail(&lu->link, &tgt->lu_list);
return 0;

View file

@ -68,15 +68,7 @@ void __armada_drm_queue_unref_work(struct drm_device *dev,
{
struct armada_private *priv = dev->dev_private;
/*
* Yes, we really must jump through these hoops just to store a
* _pointer_ to something into the kfifo. This is utterly insane
* and idiotic, because kfifo requires the _data_ pointed to by
* the pointer to be const, not the pointer itself. Not only that, but
* you have to pass a pointer _to_ the pointer you want stored.
*/
const struct drm_framebuffer *silly_api_alert = fb;
WARN_ON(!kfifo_put(&priv->fb_unref, &silly_api_alert));
WARN_ON(!kfifo_put(&priv->fb_unref, fb));
schedule_work(&priv->fb_unref_work);
}
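The deleted rant reflects the old kfifo_put() calling convention; since the kfifo type-safety rework the macro takes the value to store (here the framebuffer pointer) directly. A small hedged example of a pointer FIFO with the current macros (names are illustrative):

#include <linux/kfifo.h>
#include <linux/printk.h>

struct drm_framebuffer;             /* opaque here; only pointers are stored */

/* An 8-entry FIFO of framebuffer pointers, statically initialized. */
static DEFINE_KFIFO(fb_unref_fifo, struct drm_framebuffer *, 8);

static void queue_fb_unref(struct drm_framebuffer *fb)
{
    /* kfifo_put() now takes the value itself and returns 0 when full. */
    if (!kfifo_put(&fb_unref_fifo, fb))
        pr_warn("fb unref fifo full\n");
}

static void drain_fb_unref(void)
{
    struct drm_framebuffer *fb;

    /* kfifo_get() copies the stored value back out; 0 means empty. */
    while (kfifo_get(&fb_unref_fifo, &fb)) {
        /* drop the framebuffer reference here */
    }
}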

View file

@ -2,6 +2,7 @@ config DRM_BOCHS
tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
depends on DRM && PCI
select DRM_KMS_HELPER
select DRM_KMS_FB_HELPER
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT

View file

@ -403,7 +403,7 @@ MODULE_DEVICE_TABLE(pci, pciidlist);
void intel_detect_pch(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct pci_dev *pch;
struct pci_dev *pch = NULL;
/* In all current cases, num_pipes is equivalent to the PCH_NOP setting
* (which really amounts to a PCH but no South Display).
@ -424,12 +424,9 @@ void intel_detect_pch(struct drm_device *dev)
* all the ISA bridge devices and check for the first match, instead
* of only checking the first one.
*/
pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
while (pch) {
struct pci_dev *curr = pch;
while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) {
if (pch->vendor == PCI_VENDOR_ID_INTEL) {
unsigned short id;
id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
unsigned short id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
dev_priv->pch_id = id;
if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
@ -461,18 +458,16 @@ void intel_detect_pch(struct drm_device *dev)
DRM_DEBUG_KMS("Found LynxPoint LP PCH\n");
WARN_ON(!IS_HASWELL(dev));
WARN_ON(!IS_ULT(dev));
} else {
goto check_next;
}
pci_dev_put(pch);
} else
continue;
break;
}
check_next:
pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
pci_dev_put(curr);
}
if (!pch)
DRM_DEBUG_KMS("No PCH found?\n");
DRM_DEBUG_KMS("No PCH found.\n");
pci_dev_put(pch);
}
bool i915_semaphore_is_enabled(struct drm_device *dev)
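The rewritten loop relies on pci_get_class() reference semantics: each call drops the reference held on the device passed in and returns the next match with a fresh reference, so only one final pci_dev_put() is needed for whatever device the loop stopped on (pci_dev_put(NULL) is a no-op). A stripped-down sketch of the idiom:

#include <linux/pci.h>

/* Sketch of the pci_get_class() walk used above: iterate all ISA bridges,
 * keep the reference only for the device we break out on. */
static void walk_isa_bridges(void)
{
    struct pci_dev *dev = NULL;

    while ((dev = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, dev))) {
        if (dev->vendor != PCI_VENDOR_ID_INTEL)
            continue;   /* ref dropped by the next pci_get_class() call */

        /* inspect dev->device here ... */
        break;          /* keep the reference we still hold on dev */
    }

    pci_dev_put(dev);   /* no-op if the loop ran to completion (dev == NULL) */
}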

View file

@ -82,9 +82,22 @@ static unsigned long i915_stolen_to_physical(struct drm_device *dev)
r = devm_request_mem_region(dev->dev, base, dev_priv->gtt.stolen_size,
"Graphics Stolen Memory");
if (r == NULL) {
DRM_ERROR("conflict detected with stolen region: [0x%08x - 0x%08x]\n",
base, base + (uint32_t)dev_priv->gtt.stolen_size);
base = 0;
/*
* One more attempt but this time requesting region from
* base + 1, as we have seen that this resolves the region
* conflict with the PCI Bus.
* This is a BIOS w/a: Some BIOS wrap stolen in the root
* PCI bus, but have an off-by-one error. Hence retry the
* reservation starting from 1 instead of 0.
*/
r = devm_request_mem_region(dev->dev, base + 1,
dev_priv->gtt.stolen_size - 1,
"Graphics Stolen Memory");
if (r == NULL) {
DRM_ERROR("conflict detected with stolen region: [0x%08x - 0x%08x]\n",
base, base + (uint32_t)dev_priv->gtt.stolen_size);
base = 0;
}
}
return base;

View file

@ -1092,12 +1092,12 @@ static void assert_cursor(struct drm_i915_private *dev_priv,
struct drm_device *dev = dev_priv->dev;
bool cur_state;
if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev))
cur_state = I915_READ(CURCNTR_IVB(pipe)) & CURSOR_MODE;
else if (IS_845G(dev) || IS_I865G(dev))
if (IS_845G(dev) || IS_I865G(dev))
cur_state = I915_READ(_CURACNTR) & CURSOR_ENABLE;
else
else if (INTEL_INFO(dev)->gen <= 6 || IS_VALLEYVIEW(dev))
cur_state = I915_READ(CURCNTR(pipe)) & CURSOR_MODE;
else
cur_state = I915_READ(CURCNTR_IVB(pipe)) & CURSOR_MODE;
WARN(cur_state != state,
"cursor on pipe %c assertion failure (expected %s, current %s)\n",

View file

@ -845,7 +845,7 @@ static int hdmi_portclock_limit(struct intel_hdmi *hdmi)
{
struct drm_device *dev = intel_hdmi_to_dev(hdmi);
if (IS_G4X(dev))
if (!hdmi->has_hdmi_sink || IS_G4X(dev))
return 165000;
else if (IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8)
return 300000;
@ -899,8 +899,8 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
* outputs. We also need to check that the higher clock still fits
* within limits.
*/
if (pipe_config->pipe_bpp > 8*3 && clock_12bpc <= portclock_limit
&& HAS_PCH_SPLIT(dev)) {
if (pipe_config->pipe_bpp > 8*3 && intel_hdmi->has_hdmi_sink &&
clock_12bpc <= portclock_limit && HAS_PCH_SPLIT(dev)) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;

View file

@ -698,7 +698,7 @@ static void i9xx_enable_backlight(struct intel_connector *connector)
freq /= 0xff;
ctl = freq << 17;
if (IS_GEN2(dev) && panel->backlight.combination_mode)
if (panel->backlight.combination_mode)
ctl |= BLM_LEGACY_MODE;
if (IS_PINEVIEW(dev) && panel->backlight.active_low_pwm)
ctl |= BLM_POLARITY_PNV;
@ -979,7 +979,7 @@ static int i9xx_setup_backlight(struct intel_connector *connector)
ctl = I915_READ(BLC_PWM_CTL);
if (IS_GEN2(dev))
if (IS_GEN2(dev) || IS_I915GM(dev) || IS_I945GM(dev))
panel->backlight.combination_mode = ctl & BLM_LEGACY_MODE;
if (IS_PINEVIEW(dev))

View file

@ -3493,6 +3493,8 @@ static void valleyview_setup_pctx(struct drm_device *dev)
u32 pcbr;
int pctx_size = 24*1024;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
pcbr = I915_READ(VLV_PCBR);
if (pcbr) {
/* BIOS set it up already, grab the pre-alloc'd space */
@ -3542,8 +3544,6 @@ static void valleyview_enable_rps(struct drm_device *dev)
I915_WRITE(GTFIFODBG, gtfifodbg);
}
valleyview_setup_pctx(dev);
/* If VLV, Forcewake all wells, else re-direct to regular path */
gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
@ -4395,6 +4395,8 @@ void intel_enable_gt_powersave(struct drm_device *dev)
ironlake_enable_rc6(dev);
intel_init_emon(dev);
} else if (IS_GEN6(dev) || IS_GEN7(dev)) {
if (IS_VALLEYVIEW(dev))
valleyview_setup_pctx(dev);
/*
* PCU communication is slow and this doesn't need to be
* done at any specific time, so do this out of our fast path

View file

@ -1314,7 +1314,7 @@ atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t
}
if (is_dp)
args.v5.ucLaneNum = dp_lane_count;
else if (radeon_encoder->pixel_clock > 165000)
else if (radeon_dig_monitor_is_duallink(encoder, radeon_encoder->pixel_clock))
args.v5.ucLaneNum = 8;
else
args.v5.ucLaneNum = 4;

View file

@ -3046,7 +3046,7 @@ static u32 cik_create_bitmask(u32 bit_width)
}
/**
* cik_select_se_sh - select which SE, SH to address
* cik_get_rb_disabled - computes the mask of disabled RBs
*
* @rdev: radeon_device pointer
* @max_rb_num: max RBs (render backends) for the asic
@ -4134,8 +4134,11 @@ static void cik_cp_compute_enable(struct radeon_device *rdev, bool enable)
{
if (enable)
WREG32(CP_MEC_CNTL, 0);
else
else {
WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT));
rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false;
}
udelay(50);
}
@ -7902,7 +7905,8 @@ int cik_resume(struct radeon_device *rdev)
/* init golden registers */
cik_init_golden_registers(rdev);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = cik_startup(rdev);

View file

@ -264,6 +264,8 @@ static void cik_sdma_gfx_stop(struct radeon_device *rdev)
WREG32(SDMA0_GFX_RB_CNTL + reg_offset, rb_cntl);
WREG32(SDMA0_GFX_IB_CNTL + reg_offset, 0);
}
rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false;
rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false;
}
/**
@ -291,6 +293,11 @@ void cik_sdma_enable(struct radeon_device *rdev, bool enable)
u32 me_cntl, reg_offset;
int i;
if (enable == false) {
cik_sdma_gfx_stop(rdev);
cik_sdma_rlc_stop(rdev);
}
for (i = 0; i < 2; i++) {
if (i == 0)
reg_offset = SDMA0_REGISTER_OFFSET;
@ -420,10 +427,6 @@ static int cik_sdma_load_microcode(struct radeon_device *rdev)
if (!rdev->sdma_fw)
return -EINVAL;
/* stop the gfx rings and rlc compute queues */
cik_sdma_gfx_stop(rdev);
cik_sdma_rlc_stop(rdev);
/* halt the MEs */
cik_sdma_enable(rdev, false);
@ -492,9 +495,6 @@ int cik_sdma_resume(struct radeon_device *rdev)
*/
void cik_sdma_fini(struct radeon_device *rdev)
{
/* stop the gfx rings and rlc compute queues */
cik_sdma_gfx_stop(rdev);
cik_sdma_rlc_stop(rdev);
/* halt the MEs */
cik_sdma_enable(rdev, false);
radeon_ring_fini(rdev, &rdev->ring[R600_RING_TYPE_DMA_INDEX]);

View file

@ -5299,7 +5299,8 @@ int evergreen_resume(struct radeon_device *rdev)
/* init golden registers */
evergreen_init_golden_registers(rdev);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = evergreen_startup(rdev);

View file

@ -57,7 +57,7 @@ typedef struct SMC_Evergreen_MCRegisters SMC_Evergreen_MCRegisters;
#define EVERGREEN_SMC_FIRMWARE_HEADER_LOCATION 0x100
#define EVERGREEN_SMC_FIRMWARE_HEADER_softRegisters 0x0
#define EVERGREEN_SMC_FIRMWARE_HEADER_softRegisters 0x8
#define EVERGREEN_SMC_FIRMWARE_HEADER_stateTable 0xC
#define EVERGREEN_SMC_FIRMWARE_HEADER_mcRegisterTable 0x20

View file

@ -2105,7 +2105,8 @@ int cayman_resume(struct radeon_device *rdev)
/* init golden registers */
ni_init_golden_registers(rdev);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = cayman_startup(rdev);

View file

@ -3942,8 +3942,6 @@ int r100_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = r100_startup(rdev);
if (r) {

View file

@ -1430,8 +1430,6 @@ int r300_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = r300_startup(rdev);
if (r) {

View file

@ -325,8 +325,6 @@ int r420_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = r420_startup(rdev);
if (r) {

View file

@ -240,8 +240,6 @@ int r520_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = r520_startup(rdev);
if (r) {

View file

@ -2968,7 +2968,8 @@ int r600_resume(struct radeon_device *rdev)
/* post card */
atom_asic_init(rdev->mode_info.atom_context);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = r600_startup(rdev);

View file

@ -1521,13 +1521,16 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon)
if (r)
DRM_ERROR("ib ring test failed (%d).\n", r);
if (rdev->pm.dpm_enabled) {
if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) {
/* do dpm late init */
r = radeon_pm_late_init(rdev);
if (r) {
rdev->pm.dpm_enabled = false;
DRM_ERROR("radeon_pm_late_init failed, disabling dpm\n");
}
} else {
/* resume old pm late */
radeon_pm_resume(rdev);
}
radeon_restore_bios_scratch_regs(rdev);
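Combined with the per-ASIC *_resume() hunks elsewhere in this series, the resulting ordering can be summarized as follows; this is a rough sketch of the flow, not the literal call graph, and all identifiers come from the hunks above:

/* Rough sketch of the resume ordering after this series.  DPM keeps its
 * early radeon_pm_resume() in the per-ASIC resume path and gets its late
 * init here; the old PM code is now resumed late, from radeon_resume_kms(). */
static void radeon_pm_resume_sketch(struct radeon_device *rdev)
{
    /* per-ASIC *_resume(), before startup/ring tests: */
    if (rdev->pm.pm_method == PM_METHOD_DPM)
        radeon_pm_resume(rdev);

    /* ... asic startup, ring and IB tests ... */

    /* radeon_resume_kms(), after the rings are known to work: */
    if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) {
        if (radeon_pm_late_init(rdev)) {
            rdev->pm.dpm_enabled = false;
            DRM_ERROR("radeon_pm_late_init failed, disabling dpm\n");
        }
    } else {
        radeon_pm_resume(rdev);     /* resume old pm late */
    }
}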

View file

@ -33,6 +33,13 @@
#include <linux/vga_switcheroo.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#if defined(CONFIG_VGA_SWITCHEROO)
bool radeon_is_px(void);
#else
static inline bool radeon_is_px(void) { return false; }
#endif
/**
* radeon_driver_unload_kms - Main unload function for KMS.
*
@ -130,7 +137,8 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
"Error during ACPI methods call\n");
}
if (radeon_runtime_pm != 0) {
if ((radeon_runtime_pm == 1) ||
((radeon_runtime_pm == -1) && radeon_is_px())) {
pm_runtime_use_autosuspend(dev->dev);
pm_runtime_set_autosuspend_delay(dev->dev, 5000);
pm_runtime_set_active(dev->dev);

View file

@ -714,6 +714,9 @@ int radeon_ttm_init(struct radeon_device *rdev)
DRM_ERROR("Failed initializing VRAM heap.\n");
return r;
}
/* Change the size here instead of the init above so only lpfn is affected */
radeon_ttm_set_active_vram_size(rdev, rdev->mc.visible_vram_size);
r = radeon_bo_create(rdev, 256 * 1024, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_VRAM,
NULL, &rdev->stollen_vga_memory);
@ -935,7 +938,7 @@ static ssize_t radeon_ttm_gtt_read(struct file *f, char __user *buf,
while (size) {
loff_t p = *pos / PAGE_SIZE;
unsigned off = *pos & ~PAGE_MASK;
ssize_t cur_size = min(size, PAGE_SIZE - off);
size_t cur_size = min_t(size_t, size, PAGE_SIZE - off);
struct page *page;
void *ptr;
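The cur_size change is a type fix: the kernel's min() insists both arguments share a type, and size (size_t) versus PAGE_SIZE - off (unsigned long) differ on some 32-bit builds, while the old ssize_t result was needlessly signed. min_t() casts both sides first. A one-function sketch:

#include <linux/kernel.h>       /* min_t() */
#include <linux/mm.h>           /* PAGE_SIZE, PAGE_MASK */

/* Sketch: clamp the per-iteration copy length to what is left of the
 * current page.  min_t(size_t, ...) casts both operands, avoiding the
 * type-mismatch warning that strict min() raises when size_t and
 * unsigned long are different types. */
static size_t bytes_this_page(size_t size, loff_t pos)
{
    unsigned long off = pos & ~PAGE_MASK;

    return min_t(size_t, size, PAGE_SIZE - off);
}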

View file

@ -474,8 +474,6 @@ int rs400_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = rs400_startup(rdev);
if (r) {

View file

@ -1048,8 +1048,6 @@ int rs600_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = rs600_startup(rdev);
if (r) {

View file

@ -756,8 +756,6 @@ int rs690_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = rs690_startup(rdev);
if (r) {

View file

@ -586,8 +586,6 @@ int rv515_resume(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = rv515_startup(rdev);
if (r) {

View file

@ -1811,7 +1811,8 @@ int rv770_resume(struct radeon_device *rdev)
/* init golden registers */
rv770_init_golden_registers(rdev);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = rv770_startup(rdev);

View file

@ -6618,7 +6618,8 @@ int si_resume(struct radeon_device *rdev)
/* init golden registers */
si_init_golden_registers(rdev);
radeon_pm_resume(rdev);
if (rdev->pm.pm_method == PM_METHOD_DPM)
radeon_pm_resume(rdev);
rdev->accel_working = true;
r = si_startup(rdev);

View file

@ -351,9 +351,11 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
moved:
if (bo->evicted) {
ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement);
if (ret)
pr_err("Can not flush read caches\n");
if (bdev->driver->invalidate_caches) {
ret = bdev->driver->invalidate_caches(bdev, bo->mem.placement);
if (ret)
pr_err("Can not flush read caches\n");
}
bo->evicted = false;
}

View file

@ -339,11 +339,13 @@ int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
vma->vm_private_data = bo;
/*
* PFNMAP is faster than MIXEDMAP due to reduced page
* administration. So use MIXEDMAP only if private VMA, where
* we need to support COW.
* We'd like to use VM_PFNMAP on shared mappings, where
* (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
* but for some reason VM_PFNMAP + x86 PAT + write-combine is very
* bad for performance. Until that has been sorted out, use
* VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
*/
vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? VM_PFNMAP : VM_MIXEDMAP;
vma->vm_flags |= VM_MIXEDMAP;
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
return 0;
out_unref:
@ -359,7 +361,7 @@ int ttm_fbdev_mmap(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
vma->vm_ops = &ttm_bo_vm_ops;
vma->vm_private_data = ttm_bo_reference(bo);
vma->vm_flags |= (vma->vm_flags & VM_SHARED) ? VM_PFNMAP : VM_MIXEDMAP;
vma->vm_flags |= VM_MIXEDMAP;
vma->vm_flags |= VM_IO | VM_DONTEXPAND;
return 0;
}

View file

@ -830,6 +830,24 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
if (unlikely(ret != 0))
goto out_unlock;
/*
* A gb-aware client referencing a shared surface will
* expect a backup buffer to be present.
*/
if (dev_priv->has_mob && req->shareable) {
uint32_t backup_handle;
ret = vmw_user_dmabuf_alloc(dev_priv, tfile,
res->backup_size,
true,
&backup_handle,
&res->backup);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&res);
goto out_unlock;
}
}
tmp = vmw_resource_reference(&srf->res);
ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime,
req->shareable, VMW_RES_SURFACE,

Some files were not shown because too many files have changed in this diff