Merge tag 'drm-intel-next-2018-06-20' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Chris is doing many reworks to get full-ppgtt supported on all
platforms back to HSW, as well as many other fixes and improvements,
including:
- Use GEM suspend when aborting initialization (Chris)
- Change i915_gem_fault to return vm_fault_t (Chris); see the sketch after this list
- Expand VMA to non-GEM object entities (Chris)
- Improve logs for load failure, but quiet logging on fault injection to avoid noise on CI (Chris)
- Other page directory handling fixes and improvements for gen6 (Chris)
- Other gtt clean-up removing redundancies and unused checks (Chris)
- Reorder aliasing ppgtt fini (Chris)
- Refactor of unsetting obj->mm.pages (Chris)
- Apply batch location restrictions before pinning (Chris)
- Ringbuffer fixes for context restore (Chris)
- Execlist fixes on freeing error pointer on allocation error (Chris)
- Make closing request flush mandatory (Chris)
- Move GEM sanitize from resume_early to resume (Chris)
- Improve debug dumps (Chris)
- Silence compiler warnings in selftests (Chris)
- Other execlists changes to improve hangcheck and reset.
- Many gtt page directory fixes and improvements (Chris)
- Reorg context workarounds (Chris)
- Avoid ERR_PTR dereference on selftest (Chris)
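
The vm_fault_t item above refers to the kernel-wide convention of having
fault handlers return VM_FAULT_* codes instead of raw errnos. A minimal
sketch of that pattern, assuming invented names (example_gem_fault and
example_pin_pages are illustrative stand-ins, not the actual i915 handler):

static vm_fault_t example_gem_fault(struct vm_fault *vmf)
{
        /* Hypothetical helper: pins the backing pages and installs the
         * PTEs, returning 0 or a negative errno.
         */
        int err = example_pin_pages(vmf);

        switch (err) {
        case 0:
        case -ERESTARTSYS:
        case -EINTR:
                /* PTEs are in place, or the fault will simply be retried. */
                return VM_FAULT_NOPAGE;
        case -ENOMEM:
                return VM_FAULT_OOM;
        default:
                return VM_FAULT_SIGBUS;
        }
}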

Other GEM related work:
- Stop trying to reset GPU if reset failed (Mika)
- Add HW workaround for KBL to fix GPU reset (Mika)
- Fix context ban and hang accounting for client (Mika)
- Fixes on OA perf (Michel, Jani)
- Refactor of GuC log mechanisms (Piotr)
- Enable provoking vertex fix on Gen9 systems (Kenneth)

More ICL patches for Display enabling:
- ICL - 10-bit support for HDMI (RK)
- ICL - Start adding TBT PLL (Paulo)
- ICL - DDI HDMI level selection (Manasi)
- ICL - GMBUS GPIO pin mapping fix (Mahesh)
- ICL - Adding DP_AUX_E support (James)
- ICL - Display interrupts handling (DK)

Other display fixes and improvements:
- Fix sprite destination color keying on SKL+ (Ville)
- Fixes and improvements on PCH detection, especially for non-PCH systems (Jani)
- Document PCH_NOP (Lucas)
- Allow DBLSCAN user modes with eDP/LVDS/DSI (Ville)
- Opregion and ACPI cleanup and organization (Jani)
- Kill delays when activating PSR (Rodrigo)
- ...and a follow-up fix of the PSR activation flow (DK)
- Fix HDMI infoframe setting (Imre)
- Fix Display interrupts and modes on old gens (Ville)
- Start switching to kernel unsigned int types (Jani)
- Introduction to Amber Lake and Whiskey Lake platforms (Jose)
- Audio clock fixes for HBR3 (RK)
- Standardize i915_reg.h definitions according to our documentation and checkpatch (Paulo); see the sketch after this list
- Remove unused timespec_to_jiffies_timeout function (Arnd)
- Increase the scope of PSR wake fix for other VBTs out there (Vathsala)
- Improve debug msgs with prop name/id (Ville)
- Other clean-up of unnecessary cursor size defines (Ville)
- Enforce max hdisplay/hblank_start limits on HSW/BDW (Ville)
- Make ELD pointers constant (Jani)
- Fix for PSR VBT parse (Colin)
- Add a warning about unsupported CDCLK rates (Imre)
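
For the i915_reg.h standardization above, a hedged sketch of the documented
definition style, using invented names and offsets (EXAMPLE_CTL and the
0x123xx offsets are not real registers; _MMIO_PIPE() is the existing
per-pipe wrapper):

/* Raw per-instance offsets are underscore-prefixed and only reached
 * through the parametrized macro; bitfield macros are indented under
 * the register they describe.
 */
#define _EXAMPLE_CTL_A          0x12340
#define _EXAMPLE_CTL_B          0x12440
#define EXAMPLE_CTL(pipe)       _MMIO_PIPE(pipe, _EXAMPLE_CTL_A, _EXAMPLE_CTL_B)
#define   EXAMPLE_CTL_ENABLE    (1 << 31)

That layout is what the file's documentation block and checkpatch push for.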

Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Thu 21 Jun 2018 07:12:10 AM AEST
# gpg:                using RSA key FA625F640EEB13CA
# gpg: Good signature from "Rodrigo Vivi <rodrigo.vivi@intel.com>"
# gpg:                 aka "Rodrigo Vivi <rodrigo.vivi@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6D20 7068 EEDD 6509 1C2C  E2A3 FA62 5F64 0EEB 13CA
Link: https://patchwork.freedesktop.org/patch/msgid/20180625165622.GA21761@intel.com
commit b4d4b0b7de (Dave Airlie, 2018-06-28 13:10:37 +10:00)
100 changed files with 4461 additions and 3095 deletions

View file

@ -159,7 +159,7 @@
#define CH7017_BANG_LIMIT_CONTROL 0x7f
struct ch7017_priv {
uint8_t dummy;
u8 dummy;
};
static void ch7017_dump_regs(struct intel_dvo_device *dvo);
@ -186,7 +186,7 @@ static bool ch7017_read(struct intel_dvo_device *dvo, u8 addr, u8 *val)
static bool ch7017_write(struct intel_dvo_device *dvo, u8 addr, u8 val)
{
uint8_t buf[2] = { addr, val };
u8 buf[2] = { addr, val };
struct i2c_msg msg = {
.addr = dvo->slave_addr,
.flags = 0,
@ -258,11 +258,11 @@ static void ch7017_mode_set(struct intel_dvo_device *dvo,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
uint8_t lvds_pll_feedback_div, lvds_pll_vco_control;
uint8_t outputs_enable, lvds_control_2, lvds_power_down;
uint8_t horizontal_active_pixel_input;
uint8_t horizontal_active_pixel_output, vertical_active_line_output;
uint8_t active_input_line_output;
u8 lvds_pll_feedback_div, lvds_pll_vco_control;
u8 outputs_enable, lvds_control_2, lvds_power_down;
u8 horizontal_active_pixel_input;
u8 horizontal_active_pixel_output, vertical_active_line_output;
u8 active_input_line_output;
DRM_DEBUG_KMS("Registers before mode setting\n");
ch7017_dump_regs(dvo);
@ -333,7 +333,7 @@ static void ch7017_mode_set(struct intel_dvo_device *dvo,
/* set the CH7017 power state */
static void ch7017_dpms(struct intel_dvo_device *dvo, bool enable)
{
uint8_t val;
u8 val;
ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &val);
@ -361,7 +361,7 @@ static void ch7017_dpms(struct intel_dvo_device *dvo, bool enable)
static bool ch7017_get_hw_state(struct intel_dvo_device *dvo)
{
uint8_t val;
u8 val;
ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &val);
@ -373,7 +373,7 @@ static bool ch7017_get_hw_state(struct intel_dvo_device *dvo)
static void ch7017_dump_regs(struct intel_dvo_device *dvo)
{
uint8_t val;
u8 val;
#define DUMP(reg) \
do { \

View file

@ -85,7 +85,7 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
static struct ch7xxx_id_struct {
uint8_t vid;
u8 vid;
char *name;
} ch7xxx_ids[] = {
{ CH7011_VID, "CH7011" },
@ -96,7 +96,7 @@ static struct ch7xxx_id_struct {
};
static struct ch7xxx_did_struct {
uint8_t did;
u8 did;
char *name;
} ch7xxx_dids[] = {
{ CH7xxx_DID, "CH7XXX" },
@ -107,7 +107,7 @@ struct ch7xxx_priv {
bool quiet;
};
static char *ch7xxx_get_id(uint8_t vid)
static char *ch7xxx_get_id(u8 vid)
{
int i;
@ -119,7 +119,7 @@ static char *ch7xxx_get_id(uint8_t vid)
return NULL;
}
static char *ch7xxx_get_did(uint8_t did)
static char *ch7xxx_get_did(u8 did)
{
int i;
@ -132,7 +132,7 @@ static char *ch7xxx_get_did(uint8_t did)
}
/** Reads an 8 bit register */
static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
{
struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -170,11 +170,11 @@ static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
}
/** Writes an 8 bit register */
static bool ch7xxx_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
static bool ch7xxx_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
{
struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
uint8_t out_buf[2];
u8 out_buf[2];
struct i2c_msg msg = {
.addr = dvo->slave_addr,
.flags = 0,
@ -201,7 +201,7 @@ static bool ch7xxx_init(struct intel_dvo_device *dvo,
{
/* this will detect the CH7xxx chip on the specified i2c bus */
struct ch7xxx_priv *ch7xxx;
uint8_t vendor, device;
u8 vendor, device;
char *name, *devid;
ch7xxx = kzalloc(sizeof(struct ch7xxx_priv), GFP_KERNEL);
@ -244,7 +244,7 @@ static bool ch7xxx_init(struct intel_dvo_device *dvo,
static enum drm_connector_status ch7xxx_detect(struct intel_dvo_device *dvo)
{
uint8_t cdet, orig_pm, pm;
u8 cdet, orig_pm, pm;
ch7xxx_readb(dvo, CH7xxx_PM, &orig_pm);
@ -276,7 +276,7 @@ static void ch7xxx_mode_set(struct intel_dvo_device *dvo,
const struct drm_display_mode *mode,
const struct drm_display_mode *adjusted_mode)
{
uint8_t tvco, tpcp, tpd, tlpf, idf;
u8 tvco, tpcp, tpd, tlpf, idf;
if (mode->clock <= 65000) {
tvco = 0x23;
@ -336,7 +336,7 @@ static void ch7xxx_dump_regs(struct intel_dvo_device *dvo)
int i;
for (i = 0; i < CH7xxx_NUM_REGS; i++) {
uint8_t val;
u8 val;
if ((i % 8) == 0)
DRM_DEBUG_KMS("\n %02X: ", i);
ch7xxx_readb(dvo, i, &val);

View file

@ -161,7 +161,7 @@
* instead. The following list contains all registers that
* require saving.
*/
static const uint16_t backup_addresses[] = {
static const u16 backup_addresses[] = {
0x11, 0x12,
0x18, 0x19, 0x1a, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
@ -174,11 +174,11 @@ static const uint16_t backup_addresses[] = {
struct ivch_priv {
bool quiet;
uint16_t width, height;
u16 width, height;
/* Register backup */
uint16_t reg_backup[ARRAY_SIZE(backup_addresses)];
u16 reg_backup[ARRAY_SIZE(backup_addresses)];
};
@ -188,7 +188,7 @@ static void ivch_dump_regs(struct intel_dvo_device *dvo);
*
* Each of the 256 registers are 16 bits long.
*/
static bool ivch_read(struct intel_dvo_device *dvo, int addr, uint16_t *data)
static bool ivch_read(struct intel_dvo_device *dvo, int addr, u16 *data)
{
struct ivch_priv *priv = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -231,7 +231,7 @@ static bool ivch_read(struct intel_dvo_device *dvo, int addr, uint16_t *data)
}
/* Writes a 16-bit register on the ivch */
static bool ivch_write(struct intel_dvo_device *dvo, int addr, uint16_t data)
static bool ivch_write(struct intel_dvo_device *dvo, int addr, u16 data)
{
struct ivch_priv *priv = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -263,7 +263,7 @@ static bool ivch_init(struct intel_dvo_device *dvo,
struct i2c_adapter *adapter)
{
struct ivch_priv *priv;
uint16_t temp;
u16 temp;
int i;
priv = kzalloc(sizeof(struct ivch_priv), GFP_KERNEL);
@ -342,7 +342,7 @@ static void ivch_reset(struct intel_dvo_device *dvo)
static void ivch_dpms(struct intel_dvo_device *dvo, bool enable)
{
int i;
uint16_t vr01, vr30, backlight;
u16 vr01, vr30, backlight;
ivch_reset(dvo);
@ -379,7 +379,7 @@ static void ivch_dpms(struct intel_dvo_device *dvo, bool enable)
static bool ivch_get_hw_state(struct intel_dvo_device *dvo)
{
uint16_t vr01;
u16 vr01;
ivch_reset(dvo);
@ -398,9 +398,9 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
const struct drm_display_mode *adjusted_mode)
{
struct ivch_priv *priv = dvo->dev_priv;
uint16_t vr40 = 0;
uint16_t vr01 = 0;
uint16_t vr10;
u16 vr40 = 0;
u16 vr01 = 0;
u16 vr10;
ivch_reset(dvo);
@ -416,7 +416,7 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
if (mode->hdisplay != adjusted_mode->crtc_hdisplay ||
mode->vdisplay != adjusted_mode->crtc_vdisplay) {
uint16_t x_ratio, y_ratio;
u16 x_ratio, y_ratio;
vr01 |= VR01_PANEL_FIT_ENABLE;
vr40 |= VR40_CLOCK_GATING_ENABLE;
@ -438,7 +438,7 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
static void ivch_dump_regs(struct intel_dvo_device *dvo)
{
uint16_t val;
u16 val;
ivch_read(dvo, VR00, &val);
DRM_DEBUG_KMS("VR00: 0x%04x\n", val);

View file

@ -191,8 +191,8 @@ enum {
};
struct ns2501_reg {
uint8_t offset;
uint8_t value;
u8 offset;
u8 value;
};
/*
@ -202,23 +202,23 @@ struct ns2501_reg {
* read all this with a grain of salt.
*/
struct ns2501_configuration {
uint8_t sync; /* configuration of the C0 register */
uint8_t conf; /* configuration register 8 */
uint8_t syncb; /* configuration register 41 */
uint8_t dither; /* configuration of the dithering */
uint8_t pll_a; /* PLL configuration, register A, 1B */
uint16_t pll_b; /* PLL configuration, register B, 1C/1D */
uint16_t hstart; /* horizontal start, registers C1/C2 */
uint16_t hstop; /* horizontal total, registers C3/C4 */
uint16_t vstart; /* vertical start, registers C5/C6 */
uint16_t vstop; /* vertical total, registers C7/C8 */
uint16_t vsync; /* manual vertical sync start, 80/81 */
uint16_t vtotal; /* number of lines generated, 82/83 */
uint16_t hpos; /* horizontal position + 256, 98/99 */
uint16_t vpos; /* vertical position, 8e/8f */
uint16_t voffs; /* vertical output offset, 9c/9d */
uint16_t hscale; /* horizontal scaling factor, b8/b9 */
uint16_t vscale; /* vertical scaling factor, 10/11 */
u8 sync; /* configuration of the C0 register */
u8 conf; /* configuration register 8 */
u8 syncb; /* configuration register 41 */
u8 dither; /* configuration of the dithering */
u8 pll_a; /* PLL configuration, register A, 1B */
u16 pll_b; /* PLL configuration, register B, 1C/1D */
u16 hstart; /* horizontal start, registers C1/C2 */
u16 hstop; /* horizontal total, registers C3/C4 */
u16 vstart; /* vertical start, registers C5/C6 */
u16 vstop; /* vertical total, registers C7/C8 */
u16 vsync; /* manual vertical sync start, 80/81 */
u16 vtotal; /* number of lines generated, 82/83 */
u16 hpos; /* horizontal position + 256, 98/99 */
u16 vpos; /* vertical position, 8e/8f */
u16 voffs; /* vertical output offset, 9c/9d */
u16 hscale; /* horizontal scaling factor, b8/b9 */
u16 vscale; /* vertical scaling factor, 10/11 */
};
/*
@ -389,7 +389,7 @@ struct ns2501_priv {
** If it returns false, it might be wise to enable the
** DVO with the above function.
*/
static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, uint8_t * ch)
static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
{
struct ns2501_priv *ns = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -434,11 +434,11 @@ static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, uint8_t * ch)
** If it returns false, it might be wise to enable the
** DVO with the above function.
*/
static bool ns2501_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
static bool ns2501_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
{
struct ns2501_priv *ns = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
uint8_t out_buf[2];
u8 out_buf[2];
struct i2c_msg msg = {
.addr = dvo->slave_addr,

View file

@ -65,7 +65,7 @@ struct sil164_priv {
#define SILPTR(d) ((SIL164Ptr)(d->DriverPrivate.ptr))
static bool sil164_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
static bool sil164_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
{
struct sil164_priv *sil = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -102,11 +102,11 @@ static bool sil164_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
return false;
}
static bool sil164_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
static bool sil164_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
{
struct sil164_priv *sil = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
uint8_t out_buf[2];
u8 out_buf[2];
struct i2c_msg msg = {
.addr = dvo->slave_addr,
.flags = 0,
@ -173,7 +173,7 @@ static bool sil164_init(struct intel_dvo_device *dvo,
static enum drm_connector_status sil164_detect(struct intel_dvo_device *dvo)
{
uint8_t reg9;
u8 reg9;
sil164_readb(dvo, SIL164_REG9, &reg9);
@ -243,7 +243,7 @@ static bool sil164_get_hw_state(struct intel_dvo_device *dvo)
static void sil164_dump_regs(struct intel_dvo_device *dvo)
{
uint8_t val;
u8 val;
sil164_readb(dvo, SIL164_FREQ_LO, &val);
DRM_DEBUG_KMS("SIL164_FREQ_LO: 0x%02x\n", val);

View file

@ -90,7 +90,7 @@ struct tfp410_priv {
bool quiet;
};
static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
{
struct tfp410_priv *tfp = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
@ -127,11 +127,11 @@ static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
return false;
}
static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
{
struct tfp410_priv *tfp = dvo->dev_priv;
struct i2c_adapter *adapter = dvo->i2c_bus;
uint8_t out_buf[2];
u8 out_buf[2];
struct i2c_msg msg = {
.addr = dvo->slave_addr,
.flags = 0,
@ -155,7 +155,7 @@ static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
static int tfp410_getid(struct intel_dvo_device *dvo, int addr)
{
uint8_t ch1, ch2;
u8 ch1, ch2;
if (tfp410_readb(dvo, addr+0, &ch1) &&
tfp410_readb(dvo, addr+1, &ch2))
@ -203,7 +203,7 @@ static bool tfp410_init(struct intel_dvo_device *dvo,
static enum drm_connector_status tfp410_detect(struct intel_dvo_device *dvo)
{
enum drm_connector_status ret = connector_status_disconnected;
uint8_t ctl2;
u8 ctl2;
if (tfp410_readb(dvo, TFP410_CTL_2, &ctl2)) {
if (ctl2 & TFP410_CTL_2_RSEN)
@ -236,7 +236,7 @@ static void tfp410_mode_set(struct intel_dvo_device *dvo,
/* set the tfp410 power state */
static void tfp410_dpms(struct intel_dvo_device *dvo, bool enable)
{
uint8_t ctl1;
u8 ctl1;
if (!tfp410_readb(dvo, TFP410_CTL_1, &ctl1))
return;
@ -251,7 +251,7 @@ static void tfp410_dpms(struct intel_dvo_device *dvo, bool enable)
static bool tfp410_get_hw_state(struct intel_dvo_device *dvo)
{
uint8_t ctl1;
u8 ctl1;
if (!tfp410_readb(dvo, TFP410_CTL_1, &ctl1))
return false;
@ -264,7 +264,7 @@ static bool tfp410_get_hw_state(struct intel_dvo_device *dvo)
static void tfp410_dump_regs(struct intel_dvo_device *dvo)
{
uint8_t val, val2;
u8 val, val2;
tfp410_readb(dvo, TFP410_REV, &val);
DRM_DEBUG_KMS("TFP410_REV: 0x%02X\n", val);

View file

@ -172,6 +172,7 @@ struct decode_info {
#define OP_MEDIA_INTERFACE_DESCRIPTOR_LOAD OP_3D_MEDIA(0x2, 0x0, 0x2)
#define OP_MEDIA_GATEWAY_STATE OP_3D_MEDIA(0x2, 0x0, 0x3)
#define OP_MEDIA_STATE_FLUSH OP_3D_MEDIA(0x2, 0x0, 0x4)
#define OP_MEDIA_POOL_STATE OP_3D_MEDIA(0x2, 0x0, 0x5)
#define OP_MEDIA_OBJECT OP_3D_MEDIA(0x2, 0x1, 0x0)
#define OP_MEDIA_OBJECT_PRT OP_3D_MEDIA(0x2, 0x1, 0x2)
@ -1256,7 +1257,9 @@ static int gen8_check_mi_display_flip(struct parser_exec_state *s,
if (!info->async_flip)
return 0;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) {
stride = vgpu_vreg_t(s->vgpu, info->stride_reg) & GENMASK(9, 0);
tile = (vgpu_vreg_t(s->vgpu, info->ctrl_reg) &
GENMASK(12, 10)) >> 10;
@ -1284,7 +1287,9 @@ static int gen8_update_plane_mmio_from_mi_display_flip(
set_mask_bits(&vgpu_vreg_t(vgpu, info->surf_reg), GENMASK(31, 12),
info->surf_val << 12);
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) {
set_mask_bits(&vgpu_vreg_t(vgpu, info->stride_reg), GENMASK(9, 0),
info->stride_val);
set_mask_bits(&vgpu_vreg_t(vgpu, info->ctrl_reg), GENMASK(12, 10),
@ -1308,7 +1313,9 @@ static int decode_mi_display_flip(struct parser_exec_state *s,
if (IS_BROADWELL(dev_priv))
return gen8_decode_mi_display_flip(s, info);
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv))
return skl_decode_mi_display_flip(s, info);
return -ENODEV;
@ -1317,26 +1324,14 @@ static int decode_mi_display_flip(struct parser_exec_state *s,
static int check_mi_display_flip(struct parser_exec_state *s,
struct mi_display_flip_command_info *info)
{
struct drm_i915_private *dev_priv = s->vgpu->gvt->dev_priv;
if (IS_BROADWELL(dev_priv)
|| IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv))
return gen8_check_mi_display_flip(s, info);
return -ENODEV;
return gen8_check_mi_display_flip(s, info);
}
static int update_plane_mmio_from_mi_display_flip(
struct parser_exec_state *s,
struct mi_display_flip_command_info *info)
{
struct drm_i915_private *dev_priv = s->vgpu->gvt->dev_priv;
if (IS_BROADWELL(dev_priv)
|| IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv))
return gen8_update_plane_mmio_from_mi_display_flip(s, info);
return -ENODEV;
return gen8_update_plane_mmio_from_mi_display_flip(s, info);
}
static int cmd_handler_mi_display_flip(struct parser_exec_state *s)
@ -1615,15 +1610,10 @@ static int copy_gma_to_hva(struct intel_vgpu *vgpu, struct intel_vgpu_mm *mm,
*/
static int batch_buffer_needs_scan(struct parser_exec_state *s)
{
struct intel_gvt *gvt = s->vgpu->gvt;
if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv)) {
/* BDW decides privilege based on address space */
if (cmd_val(s, 0) & (1 << 8) &&
/* Decide privilege based on address space */
if (cmd_val(s, 0) & (1 << 8) &&
!(s->vgpu->scan_nonprivbb & (1 << s->ring_id)))
return 0;
}
return 0;
return 1;
}
@ -2349,6 +2339,9 @@ static struct cmd_info cmd_info[] = {
{"MEDIA_STATE_FLUSH", OP_MEDIA_STATE_FLUSH, F_LEN_VAR, R_RCS, D_ALL,
0, 16, NULL},
{"MEDIA_POOL_STATE", OP_MEDIA_POOL_STATE, F_LEN_VAR, R_RCS, D_ALL,
0, 16, NULL},
{"MEDIA_OBJECT", OP_MEDIA_OBJECT, F_LEN_VAR, R_RCS, D_ALL, 0, 16, NULL},
{"MEDIA_CURBE_LOAD", OP_MEDIA_CURBE_LOAD, F_LEN_VAR, R_RCS, D_ALL,

View file

@ -171,6 +171,29 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
int pipe;
if (IS_BROXTON(dev_priv)) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~(BXT_DE_PORT_HP_DDIA |
BXT_DE_PORT_HP_DDIB |
BXT_DE_PORT_HP_DDIC);
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIA;
}
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIB;
}
if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
BXT_DE_PORT_HP_DDIC;
}
return;
}
vgpu_vreg_t(vgpu, SDEISR) &= ~(SDE_PORTB_HOTPLUG_CPT |
SDE_PORTC_HOTPLUG_CPT |
SDE_PORTD_HOTPLUG_CPT);
@ -337,26 +360,28 @@ void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt)
struct intel_gvt_irq *irq = &gvt->irq;
struct intel_vgpu *vgpu;
int pipe, id;
int found = false;
if (WARN_ON(!mutex_is_locked(&gvt->lock)))
return;
mutex_lock(&gvt->lock);
for_each_active_vgpu(gvt, vgpu, id) {
for (pipe = 0; pipe < I915_MAX_PIPES; pipe++) {
if (pipe_is_enabled(vgpu, pipe))
goto out;
if (pipe_is_enabled(vgpu, pipe)) {
found = true;
break;
}
}
if (found)
break;
}
/* all the pipes are disabled */
hrtimer_cancel(&irq->vblank_timer.timer);
return;
out:
hrtimer_start(&irq->vblank_timer.timer,
ktime_add_ns(ktime_get(), irq->vblank_timer.period),
HRTIMER_MODE_ABS);
if (!found)
hrtimer_cancel(&irq->vblank_timer.timer);
else
hrtimer_start(&irq->vblank_timer.timer,
ktime_add_ns(ktime_get(), irq->vblank_timer.period),
HRTIMER_MODE_ABS);
mutex_unlock(&gvt->lock);
}
static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
@ -393,8 +418,10 @@ static void emulate_vblank(struct intel_vgpu *vgpu)
{
int pipe;
mutex_lock(&vgpu->vgpu_lock);
for_each_pipe(vgpu->gvt->dev_priv, pipe)
emulate_vblank_on_pipe(vgpu, pipe);
mutex_unlock(&vgpu->vgpu_lock);
}
/**
@ -409,11 +436,10 @@ void intel_gvt_emulate_vblank(struct intel_gvt *gvt)
struct intel_vgpu *vgpu;
int id;
if (WARN_ON(!mutex_is_locked(&gvt->lock)))
return;
mutex_lock(&gvt->lock);
for_each_active_vgpu(gvt, vgpu, id)
emulate_vblank(vgpu);
mutex_unlock(&gvt->lock);
}
/**

View file

@ -164,7 +164,9 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
obj->read_domains = I915_GEM_DOMAIN_GTT;
obj->write_domain = 0;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) {
unsigned int tiling_mode = 0;
unsigned int stride = 0;
@ -192,6 +194,14 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
return obj;
}
static bool validate_hotspot(struct intel_vgpu_cursor_plane_format *c)
{
if (c && c->x_hot <= c->width && c->y_hot <= c->height)
return true;
else
return false;
}
static int vgpu_get_plane_info(struct drm_device *dev,
struct intel_vgpu *vgpu,
struct intel_vgpu_fb_info *info,
@ -229,12 +239,14 @@ static int vgpu_get_plane_info(struct drm_device *dev,
info->x_pos = c.x_pos;
info->y_pos = c.y_pos;
/* The invalid cursor hotspot value is delivered to host
* until we find a way to get the cursor hotspot info of
* guest OS.
*/
info->x_hot = UINT_MAX;
info->y_hot = UINT_MAX;
if (validate_hotspot(&c)) {
info->x_hot = c.x_hot;
info->y_hot = c.y_hot;
} else {
info->x_hot = UINT_MAX;
info->y_hot = UINT_MAX;
}
info->size = (((info->stride * c.height * c.bpp) / 8)
+ (PAGE_SIZE - 1)) >> PAGE_SHIFT;
} else {

View file

@ -77,6 +77,20 @@ static unsigned char edid_get_byte(struct intel_vgpu *vgpu)
return chr;
}
static inline int bxt_get_port_from_gmbus0(u32 gmbus0)
{
int port_select = gmbus0 & _GMBUS_PIN_SEL_MASK;
int port = -EINVAL;
if (port_select == 1)
port = PORT_B;
else if (port_select == 2)
port = PORT_C;
else if (port_select == 3)
port = PORT_D;
return port;
}
static inline int get_port_from_gmbus0(u32 gmbus0)
{
int port_select = gmbus0 & _GMBUS_PIN_SEL_MASK;
@ -105,6 +119,7 @@ static void reset_gmbus_controller(struct intel_vgpu *vgpu)
static int gmbus0_mmio_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
int port, pin_select;
memcpy(&vgpu_vreg(vgpu, offset), p_data, bytes);
@ -116,7 +131,10 @@ static int gmbus0_mmio_write(struct intel_vgpu *vgpu,
if (pin_select == 0)
return 0;
port = get_port_from_gmbus0(pin_select);
if (IS_BROXTON(dev_priv))
port = bxt_get_port_from_gmbus0(pin_select);
else
port = get_port_from_gmbus0(pin_select);
if (WARN_ON(port < 0))
return 0;

View file

@ -146,14 +146,11 @@ struct execlist_ring_context {
u32 nop4;
u32 lri_cmd_2;
struct execlist_mmio_pair ctx_timestamp;
struct execlist_mmio_pair pdp3_UDW;
struct execlist_mmio_pair pdp3_LDW;
struct execlist_mmio_pair pdp2_UDW;
struct execlist_mmio_pair pdp2_LDW;
struct execlist_mmio_pair pdp1_UDW;
struct execlist_mmio_pair pdp1_LDW;
struct execlist_mmio_pair pdp0_UDW;
struct execlist_mmio_pair pdp0_LDW;
/*
* pdps[8]={ pdp3_UDW, pdp3_LDW, pdp2_UDW, pdp2_LDW,
* pdp1_UDW, pdp1_LDW, pdp0_UDW, pdp0_LDW}
*/
struct execlist_mmio_pair pdps[8];
};
struct intel_vgpu_elsp_dwords {

View file

@ -36,6 +36,7 @@
#include <uapi/drm/drm_fourcc.h>
#include "i915_drv.h"
#include "gvt.h"
#include "i915_pvinfo.h"
#define PRIMARY_FORMAT_NUM 16
struct pixel_format {
@ -150,7 +151,9 @@ static u32 intel_vgpu_get_stride(struct intel_vgpu *vgpu, int pipe,
u32 stride_reg = vgpu_vreg_t(vgpu, DSPSTRIDE(pipe)) & stride_mask;
u32 stride = stride_reg;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) {
switch (tiled) {
case PLANE_CTL_TILED_LINEAR:
stride = stride_reg * 64;
@ -214,7 +217,9 @@ int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
if (!plane->enabled)
return -ENODEV;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) {
plane->tiled = (val & PLANE_CTL_TILED_MASK) >>
_PLANE_CTL_TILED_SHIFT;
fmt = skl_format_to_drm(
@ -256,7 +261,9 @@ int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
}
plane->stride = intel_vgpu_get_stride(vgpu, pipe, (plane->tiled << 10),
(IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) ?
(IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv)) ?
(_PRI_PLANE_STRIDE_MASK >> 6) :
_PRI_PLANE_STRIDE_MASK, plane->bpp);
@ -384,6 +391,8 @@ int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
plane->y_pos = (val & _CURSOR_POS_Y_MASK) >> _CURSOR_POS_Y_SHIFT;
plane->y_sign = (val & _CURSOR_SIGN_Y_MASK) >> _CURSOR_SIGN_Y_SHIFT;
plane->x_hot = vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot));
plane->y_hot = vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot));
return 0;
}

View file

@ -162,7 +162,7 @@ static int verify_firmware(struct intel_gvt *gvt,
h = (struct gvt_firmware_header *)fw->data;
crc32_start = offsetof(struct gvt_firmware_header, crc32) + 4;
crc32_start = offsetofend(struct gvt_firmware_header, crc32);
mem = fw->data + crc32_start;
#define VERIFY(s, a, b) do { \

View file

@ -1973,7 +1973,7 @@ static int alloc_scratch_pages(struct intel_vgpu *vgpu,
* GTT_TYPE_PPGTT_PDE_PT level pt, that means this scratch_pt it self
* is GTT_TYPE_PPGTT_PTE_PT, and full filled by scratch page mfn.
*/
if (type > GTT_TYPE_PPGTT_PTE_PT && type < GTT_TYPE_MAX) {
if (type > GTT_TYPE_PPGTT_PTE_PT) {
struct intel_gvt_gtt_entry se;
memset(&se, 0, sizeof(struct intel_gvt_gtt_entry));
@ -2257,13 +2257,8 @@ int intel_gvt_init_gtt(struct intel_gvt *gvt)
gvt_dbg_core("init gtt\n");
if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv)) {
gvt->gtt.pte_ops = &gen8_gtt_pte_ops;
gvt->gtt.gma_ops = &gen8_gtt_gma_ops;
} else {
return -ENODEV;
}
gvt->gtt.pte_ops = &gen8_gtt_pte_ops;
gvt->gtt.gma_ops = &gen8_gtt_gma_ops;
page = (void *)get_zeroed_page(GFP_KERNEL);
if (!page) {

View file

@ -238,18 +238,15 @@ static void init_device_info(struct intel_gvt *gvt)
struct intel_gvt_device_info *info = &gvt->device_info;
struct pci_dev *pdev = gvt->dev_priv->drm.pdev;
if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv)) {
info->max_support_vgpus = 8;
info->cfg_space_size = PCI_CFG_SPACE_EXP_SIZE;
info->mmio_size = 2 * 1024 * 1024;
info->mmio_bar = 0;
info->gtt_start_offset = 8 * 1024 * 1024;
info->gtt_entry_size = 8;
info->gtt_entry_size_shift = 3;
info->gmadr_bytes_in_cmd = 8;
info->max_surface_size = 36 * 1024 * 1024;
}
info->max_support_vgpus = 8;
info->cfg_space_size = PCI_CFG_SPACE_EXP_SIZE;
info->mmio_size = 2 * 1024 * 1024;
info->mmio_bar = 0;
info->gtt_start_offset = 8 * 1024 * 1024;
info->gtt_entry_size = 8;
info->gtt_entry_size_shift = 3;
info->gmadr_bytes_in_cmd = 8;
info->max_surface_size = 36 * 1024 * 1024;
info->msi_cap_offset = pdev->msi_cap;
}
@ -271,11 +268,8 @@ static int gvt_service_thread(void *data)
continue;
if (test_and_clear_bit(INTEL_GVT_REQUEST_EMULATE_VBLANK,
(void *)&gvt->service_request)) {
mutex_lock(&gvt->lock);
(void *)&gvt->service_request))
intel_gvt_emulate_vblank(gvt);
mutex_unlock(&gvt->lock);
}
if (test_bit(INTEL_GVT_REQUEST_SCHED,
(void *)&gvt->service_request) ||
@ -379,6 +373,7 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
idr_init(&gvt->vgpu_idr);
spin_lock_init(&gvt->scheduler.mmio_context_lock);
mutex_init(&gvt->lock);
mutex_init(&gvt->sched_lock);
gvt->dev_priv = dev_priv;
init_device_info(gvt);

View file

@ -170,12 +170,18 @@ struct intel_vgpu_submission {
struct intel_vgpu {
struct intel_gvt *gvt;
struct mutex vgpu_lock;
int id;
unsigned long handle; /* vGPU handle used by hypervisor MPT modules */
bool active;
bool pv_notified;
bool failsafe;
unsigned int resetting_eng;
/* Both sched_data and sched_ctl can be seen a part of the global gvt
* scheduler structure. So below 2 vgpu data are protected
* by sched_lock, not vgpu_lock.
*/
void *sched_data;
struct vgpu_sched_ctl sched_ctl;
@ -294,7 +300,13 @@ struct intel_vgpu_type {
};
struct intel_gvt {
/* GVT scope lock, protect GVT itself, and all resource currently
* not yet protected by special locks(vgpu and scheduler lock).
*/
struct mutex lock;
/* scheduler scope lock, protect gvt and vgpu schedule related data */
struct mutex sched_lock;
struct drm_i915_private *dev_priv;
struct idr vgpu_idr; /* vGPU IDR pool */
@ -314,6 +326,10 @@ struct intel_gvt {
struct task_struct *service_thread;
wait_queue_head_t service_thread_wq;
/* service_request is always used in bit operation, we should always
* use it with atomic bit ops so that no need to use gvt big lock.
*/
unsigned long service_request;
struct {

View file

@ -55,6 +55,8 @@ unsigned long intel_gvt_get_device_type(struct intel_gvt *gvt)
return D_SKL;
else if (IS_KABYLAKE(gvt->dev_priv))
return D_KBL;
else if (IS_BROXTON(gvt->dev_priv))
return D_BXT;
return 0;
}
@ -255,7 +257,8 @@ static int mul_force_wake_write(struct intel_vgpu *vgpu,
new = CALC_MODE_MASK_REG(old, *(u32 *)p_data);
if (IS_SKYLAKE(vgpu->gvt->dev_priv)
|| IS_KABYLAKE(vgpu->gvt->dev_priv)) {
|| IS_KABYLAKE(vgpu->gvt->dev_priv)
|| IS_BROXTON(vgpu->gvt->dev_priv)) {
switch (offset) {
case FORCEWAKE_RENDER_GEN9_REG:
ack_reg_offset = FORCEWAKE_ACK_RENDER_GEN9_REG;
@ -316,6 +319,7 @@ static int gdrst_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
}
}
/* vgpu_lock already hold by emulate mmio r/w */
intel_gvt_reset_vgpu_locked(vgpu, false, engine_mask);
/* sw will wait for the device to ack the reset request */
@ -420,7 +424,10 @@ static int pipeconf_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
vgpu_vreg(vgpu, offset) |= I965_PIPECONF_ACTIVE;
else
vgpu_vreg(vgpu, offset) &= ~I965_PIPECONF_ACTIVE;
/* vgpu_lock already hold by emulate mmio r/w */
mutex_unlock(&vgpu->vgpu_lock);
intel_gvt_check_vblank_emulation(vgpu->gvt);
mutex_lock(&vgpu->vgpu_lock);
return 0;
}
@ -857,7 +864,8 @@ static int dp_aux_ch_ctl_mmio_write(struct intel_vgpu *vgpu,
data = vgpu_vreg(vgpu, offset);
if ((IS_SKYLAKE(vgpu->gvt->dev_priv)
|| IS_KABYLAKE(vgpu->gvt->dev_priv))
|| IS_KABYLAKE(vgpu->gvt->dev_priv)
|| IS_BROXTON(vgpu->gvt->dev_priv))
&& offset != _REG_SKL_DP_AUX_CH_CTL(port_index)) {
/* SKL DPB/C/D aux ctl register changed */
return 0;
@ -1209,8 +1217,8 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
ret = handle_g2v_notification(vgpu, data);
break;
/* add xhot and yhot to handled list to avoid error log */
case 0x78830:
case 0x78834:
case _vgtif_reg(cursor_x_hot):
case _vgtif_reg(cursor_y_hot):
case _vgtif_reg(pdp[0].lo):
case _vgtif_reg(pdp[0].hi):
case _vgtif_reg(pdp[1].lo):
@ -1369,6 +1377,16 @@ static int mailbox_write(struct intel_vgpu *vgpu, unsigned int offset,
*data0 = 0x1e1a1100;
else
*data0 = 0x61514b3d;
} else if (IS_BROXTON(vgpu->gvt->dev_priv)) {
/**
* "Read memory latency" command on gen9.
* Below memory latency values are read
* from Broxton MRB.
*/
if (!*data0)
*data0 = 0x16080707;
else
*data0 = 0x16161616;
}
break;
case SKL_PCODE_CDCLK_CONTROL:
@ -1426,8 +1444,11 @@ static int skl_power_well_ctl_write(struct intel_vgpu *vgpu,
{
u32 v = *(u32 *)p_data;
v &= (1 << 31) | (1 << 29) | (1 << 9) |
(1 << 7) | (1 << 5) | (1 << 3) | (1 << 1);
if (IS_BROXTON(vgpu->gvt->dev_priv))
v &= (1 << 31) | (1 << 29);
else
v &= (1 << 31) | (1 << 29) | (1 << 9) |
(1 << 7) | (1 << 5) | (1 << 3) | (1 << 1);
v |= (v >> 1);
return intel_vgpu_default_mmio_write(vgpu, offset, &v, bytes);
@ -1447,6 +1468,102 @@ static int skl_lcpll_write(struct intel_vgpu *vgpu, unsigned int offset,
return 0;
}
static int bxt_de_pll_enable_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = *(u32 *)p_data;
if (v & BXT_DE_PLL_PLL_ENABLE)
v |= BXT_DE_PLL_LOCK;
vgpu_vreg(vgpu, offset) = v;
return 0;
}
static int bxt_port_pll_enable_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = *(u32 *)p_data;
if (v & PORT_PLL_ENABLE)
v |= PORT_PLL_LOCK;
vgpu_vreg(vgpu, offset) = v;
return 0;
}
static int bxt_phy_ctl_family_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = *(u32 *)p_data;
u32 data = v & COMMON_RESET_DIS ? BXT_PHY_LANE_ENABLED : 0;
vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_A) = data;
vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_B) = data;
vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_C) = data;
vgpu_vreg(vgpu, offset) = v;
return 0;
}
static int bxt_port_tx_dw3_read(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = vgpu_vreg(vgpu, offset);
v &= ~UNIQUE_TRANGE_EN_METHOD;
vgpu_vreg(vgpu, offset) = v;
return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
}
static int bxt_pcs_dw12_grp_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = *(u32 *)p_data;
if (offset == _PORT_PCS_DW12_GRP_A || offset == _PORT_PCS_DW12_GRP_B) {
vgpu_vreg(vgpu, offset - 0x600) = v;
vgpu_vreg(vgpu, offset - 0x800) = v;
} else {
vgpu_vreg(vgpu, offset - 0x400) = v;
vgpu_vreg(vgpu, offset - 0x600) = v;
}
vgpu_vreg(vgpu, offset) = v;
return 0;
}
static int bxt_gt_disp_pwron_write(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
u32 v = *(u32 *)p_data;
if (v & BIT(0)) {
vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &=
~PHY_RESERVED;
vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
PHY_POWER_GOOD;
}
if (v & BIT(1)) {
vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) &=
~PHY_RESERVED;
vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) |=
PHY_POWER_GOOD;
}
vgpu_vreg(vgpu, offset) = v;
return 0;
}
static int mmio_read_from_hw(struct intel_vgpu *vgpu,
unsigned int offset, void *p_data, unsigned int bytes)
{
@ -2670,17 +2787,17 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
MMIO_D(_MMIO(0x45504), D_SKL_PLUS);
MMIO_D(_MMIO(0x45520), D_SKL_PLUS);
MMIO_D(_MMIO(0x46000), D_SKL_PLUS);
MMIO_DH(_MMIO(0x46010), D_SKL | D_KBL, NULL, skl_lcpll_write);
MMIO_DH(_MMIO(0x46014), D_SKL | D_KBL, NULL, skl_lcpll_write);
MMIO_D(_MMIO(0x6C040), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6C048), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6C050), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6C044), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6C04C), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6C054), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6c058), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6c05c), D_SKL | D_KBL);
MMIO_DH(_MMIO(0x6c060), D_SKL | D_KBL, dpll_status_read, NULL);
MMIO_DH(_MMIO(0x46010), D_SKL_PLUS, NULL, skl_lcpll_write);
MMIO_DH(_MMIO(0x46014), D_SKL_PLUS, NULL, skl_lcpll_write);
MMIO_D(_MMIO(0x6C040), D_SKL_PLUS);
MMIO_D(_MMIO(0x6C048), D_SKL_PLUS);
MMIO_D(_MMIO(0x6C050), D_SKL_PLUS);
MMIO_D(_MMIO(0x6C044), D_SKL_PLUS);
MMIO_D(_MMIO(0x6C04C), D_SKL_PLUS);
MMIO_D(_MMIO(0x6C054), D_SKL_PLUS);
MMIO_D(_MMIO(0x6c058), D_SKL_PLUS);
MMIO_D(_MMIO(0x6c05c), D_SKL_PLUS);
MMIO_DH(_MMIO(0x6c060), D_SKL_PLUS, dpll_status_read, NULL);
MMIO_DH(SKL_PS_WIN_POS(PIPE_A, 0), D_SKL_PLUS, NULL, pf_write);
MMIO_DH(SKL_PS_WIN_POS(PIPE_A, 1), D_SKL_PLUS, NULL, pf_write);
@ -2805,53 +2922,57 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
MMIO_D(_MMIO(0x7239c), D_SKL_PLUS);
MMIO_D(_MMIO(0x7039c), D_SKL_PLUS);
MMIO_D(_MMIO(0x8f074), D_SKL | D_KBL);
MMIO_D(_MMIO(0x8f004), D_SKL | D_KBL);
MMIO_D(_MMIO(0x8f034), D_SKL | D_KBL);
MMIO_D(_MMIO(0x8f074), D_SKL_PLUS);
MMIO_D(_MMIO(0x8f004), D_SKL_PLUS);
MMIO_D(_MMIO(0x8f034), D_SKL_PLUS);
MMIO_D(_MMIO(0xb11c), D_SKL | D_KBL);
MMIO_D(_MMIO(0xb11c), D_SKL_PLUS);
MMIO_D(_MMIO(0x51000), D_SKL | D_KBL);
MMIO_D(_MMIO(0x51000), D_SKL_PLUS);
MMIO_D(_MMIO(0x6c00c), D_SKL_PLUS);
MMIO_F(_MMIO(0xc800), 0x7f8, F_CMD_ACCESS, 0, 0, D_SKL | D_KBL, NULL, NULL);
MMIO_F(_MMIO(0xb020), 0x80, F_CMD_ACCESS, 0, 0, D_SKL | D_KBL, NULL, NULL);
MMIO_F(_MMIO(0xc800), 0x7f8, F_CMD_ACCESS, 0, 0, D_SKL_PLUS,
NULL, NULL);
MMIO_F(_MMIO(0xb020), 0x80, F_CMD_ACCESS, 0, 0, D_SKL_PLUS,
NULL, NULL);
MMIO_D(RPM_CONFIG0, D_SKL_PLUS);
MMIO_D(_MMIO(0xd08), D_SKL_PLUS);
MMIO_D(RC6_LOCATION, D_SKL_PLUS);
MMIO_DFH(_MMIO(0x20e0), D_SKL_PLUS, F_MODE_MASK, NULL, NULL);
MMIO_DFH(_MMIO(0x20ec), D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x20ec), D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS,
NULL, NULL);
/* TRTT */
MMIO_DFH(_MMIO(0x4de0), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4de4), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4de8), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4dec), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4df0), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4df4), D_SKL | D_KBL, F_CMD_ACCESS, NULL, gen9_trtte_write);
MMIO_DH(_MMIO(0x4dfc), D_SKL | D_KBL, NULL, gen9_trtt_chicken_write);
MMIO_DFH(_MMIO(0x4de0), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4de4), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4de8), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4dec), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4df0), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(_MMIO(0x4df4), D_SKL_PLUS, F_CMD_ACCESS,
NULL, gen9_trtte_write);
MMIO_DH(_MMIO(0x4dfc), D_SKL_PLUS, NULL, gen9_trtt_chicken_write);
MMIO_D(_MMIO(0x45008), D_SKL | D_KBL);
MMIO_D(_MMIO(0x45008), D_SKL_PLUS);
MMIO_D(_MMIO(0x46430), D_SKL | D_KBL);
MMIO_D(_MMIO(0x46430), D_SKL_PLUS);
MMIO_D(_MMIO(0x46520), D_SKL | D_KBL);
MMIO_D(_MMIO(0x46520), D_SKL_PLUS);
MMIO_D(_MMIO(0xc403c), D_SKL | D_KBL);
MMIO_D(_MMIO(0xc403c), D_SKL_PLUS);
MMIO_D(_MMIO(0xb004), D_SKL_PLUS);
MMIO_DH(DMA_CTRL, D_SKL_PLUS, NULL, dma_ctrl_write);
MMIO_D(_MMIO(0x65900), D_SKL_PLUS);
MMIO_D(_MMIO(0x1082c0), D_SKL | D_KBL);
MMIO_D(_MMIO(0x4068), D_SKL | D_KBL);
MMIO_D(_MMIO(0x67054), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6e560), D_SKL | D_KBL);
MMIO_D(_MMIO(0x6e554), D_SKL | D_KBL);
MMIO_D(_MMIO(0x2b20), D_SKL | D_KBL);
MMIO_D(_MMIO(0x65f00), D_SKL | D_KBL);
MMIO_D(_MMIO(0x65f08), D_SKL | D_KBL);
MMIO_D(_MMIO(0x320f0), D_SKL | D_KBL);
MMIO_D(_MMIO(0x1082c0), D_SKL_PLUS);
MMIO_D(_MMIO(0x4068), D_SKL_PLUS);
MMIO_D(_MMIO(0x67054), D_SKL_PLUS);
MMIO_D(_MMIO(0x6e560), D_SKL_PLUS);
MMIO_D(_MMIO(0x6e554), D_SKL_PLUS);
MMIO_D(_MMIO(0x2b20), D_SKL_PLUS);
MMIO_D(_MMIO(0x65f00), D_SKL_PLUS);
MMIO_D(_MMIO(0x65f08), D_SKL_PLUS);
MMIO_D(_MMIO(0x320f0), D_SKL_PLUS);
MMIO_D(_MMIO(0x70034), D_SKL_PLUS);
MMIO_D(_MMIO(0x71034), D_SKL_PLUS);
@ -2869,11 +2990,185 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
MMIO_D(_MMIO(0x44500), D_SKL_PLUS);
MMIO_DFH(GEN9_CSFE_CHICKEN1_RCS, D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
MMIO_DFH(GEN8_HDC_CHICKEN1, D_SKL | D_KBL, F_MODE_MASK | F_CMD_ACCESS,
MMIO_DFH(GEN8_HDC_CHICKEN1, D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS,
NULL, NULL);
MMIO_D(_MMIO(0x4ab8), D_KBL);
MMIO_D(_MMIO(0x2248), D_SKL_PLUS | D_KBL);
MMIO_D(_MMIO(0x2248), D_KBL | D_SKL);
return 0;
}
static int init_bxt_mmio_info(struct intel_gvt *gvt)
{
struct drm_i915_private *dev_priv = gvt->dev_priv;
int ret;
MMIO_F(_MMIO(0x80000), 0x3000, 0, 0, 0, D_BXT, NULL, NULL);
MMIO_D(GEN7_SAMPLER_INSTDONE, D_BXT);
MMIO_D(GEN7_ROW_INSTDONE, D_BXT);
MMIO_D(GEN8_FAULT_TLB_DATA0, D_BXT);
MMIO_D(GEN8_FAULT_TLB_DATA1, D_BXT);
MMIO_D(ERROR_GEN6, D_BXT);
MMIO_D(DONE_REG, D_BXT);
MMIO_D(EIR, D_BXT);
MMIO_D(PGTBL_ER, D_BXT);
MMIO_D(_MMIO(0x4194), D_BXT);
MMIO_D(_MMIO(0x4294), D_BXT);
MMIO_D(_MMIO(0x4494), D_BXT);
MMIO_RING_D(RING_PSMI_CTL, D_BXT);
MMIO_RING_D(RING_DMA_FADD, D_BXT);
MMIO_RING_D(RING_DMA_FADD_UDW, D_BXT);
MMIO_RING_D(RING_IPEHR, D_BXT);
MMIO_RING_D(RING_INSTPS, D_BXT);
MMIO_RING_D(RING_BBADDR_UDW, D_BXT);
MMIO_RING_D(RING_BBSTATE, D_BXT);
MMIO_RING_D(RING_IPEIR, D_BXT);
MMIO_F(SOFT_SCRATCH(0), 16 * 4, 0, 0, 0, D_BXT, NULL, NULL);
MMIO_DH(BXT_P_CR_GT_DISP_PWRON, D_BXT, NULL, bxt_gt_disp_pwron_write);
MMIO_D(BXT_RP_STATE_CAP, D_BXT);
MMIO_DH(BXT_PHY_CTL_FAMILY(DPIO_PHY0), D_BXT,
NULL, bxt_phy_ctl_family_write);
MMIO_DH(BXT_PHY_CTL_FAMILY(DPIO_PHY1), D_BXT,
NULL, bxt_phy_ctl_family_write);
MMIO_D(BXT_PHY_CTL(PORT_A), D_BXT);
MMIO_D(BXT_PHY_CTL(PORT_B), D_BXT);
MMIO_D(BXT_PHY_CTL(PORT_C), D_BXT);
MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_A), D_BXT,
NULL, bxt_port_pll_enable_write);
MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_B), D_BXT,
NULL, bxt_port_pll_enable_write);
MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_C), D_BXT, NULL,
bxt_port_pll_enable_write);
MMIO_D(BXT_PORT_CL1CM_DW0(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW9(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW10(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW28(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW30(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL2CM_DW6(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_REF_DW3(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_REF_DW6(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_REF_DW8(DPIO_PHY0), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW0(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW9(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW10(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW28(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_CL1CM_DW30(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_CL2CM_DW6(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_REF_DW3(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_REF_DW6(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_REF_DW8(DPIO_PHY1), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY0, DPIO_CH0), D_BXT,
NULL, bxt_pcs_dw12_grp_write);
MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY0, DPIO_CH0), D_BXT,
bxt_port_tx_dw3_read, NULL);
MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 0), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 1), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 2), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 0), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 1), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 2), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 6), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 8), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 9), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 10), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY0, DPIO_CH1), D_BXT,
NULL, bxt_pcs_dw12_grp_write);
MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY0, DPIO_CH1), D_BXT,
bxt_port_tx_dw3_read, NULL);
MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 0), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 1), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 2), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 0), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 1), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 2), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 6), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 8), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 9), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 10), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY1, DPIO_CH0), D_BXT,
NULL, bxt_pcs_dw12_grp_write);
MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY1, DPIO_CH0), D_BXT,
bxt_port_tx_dw3_read, NULL);
MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 0), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 1), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 2), D_BXT);
MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 0), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 1), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 2), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 3), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 6), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 8), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 9), D_BXT);
MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 10), D_BXT);
MMIO_D(BXT_DE_PLL_CTL, D_BXT);
MMIO_DH(BXT_DE_PLL_ENABLE, D_BXT, NULL, bxt_de_pll_enable_write);
MMIO_D(BXT_DSI_PLL_CTL, D_BXT);
MMIO_D(BXT_DSI_PLL_ENABLE, D_BXT);
MMIO_D(GEN9_CLKGATE_DIS_0, D_BXT);
MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_A), D_BXT);
MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_B), D_BXT);
MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_C), D_BXT);
MMIO_D(RC6_CTX_BASE, D_BXT);
MMIO_D(GEN8_PUSHBUS_CONTROL, D_BXT);
MMIO_D(GEN8_PUSHBUS_ENABLE, D_BXT);
MMIO_D(GEN8_PUSHBUS_SHIFT, D_BXT);
MMIO_D(GEN6_GFXPAUSE, D_BXT);
MMIO_D(GEN8_L3SQCREG1, D_BXT);
MMIO_DFH(GEN9_CTX_PREEMPT_REG, D_BXT, F_CMD_ACCESS, NULL, NULL);
return 0;
}
@ -2965,6 +3260,16 @@ int intel_gvt_setup_mmio_info(struct intel_gvt *gvt)
ret = init_skl_mmio_info(gvt);
if (ret)
goto err;
} else if (IS_BROXTON(dev_priv)) {
ret = init_broadwell_mmio_info(gvt);
if (ret)
goto err;
ret = init_skl_mmio_info(gvt);
if (ret)
goto err;
ret = init_bxt_mmio_info(gvt);
if (ret)
goto err;
}
gvt->mmio.mmio_block = mmio_blocks;

View file

@ -350,7 +350,8 @@ static void update_upstream_irq(struct intel_vgpu *vgpu,
clear_bits |= (1 << bit);
}
WARN_ON(!up_irq_info);
if (WARN_ON(!up_irq_info))
return;
if (up_irq_info->group == INTEL_GVT_IRQ_INFO_MASTER) {
u32 isr = i915_mmio_reg_offset(up_irq_info->reg_base);
@ -580,7 +581,9 @@ static void gen8_init_irq(
SET_BIT_INFO(irq, 4, PRIMARY_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
SET_BIT_INFO(irq, 5, SPRITE_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
} else if (IS_SKYLAKE(gvt->dev_priv) || IS_KABYLAKE(gvt->dev_priv)) {
} else if (IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv)
|| IS_BROXTON(gvt->dev_priv)) {
SET_BIT_INFO(irq, 25, AUX_CHANNEL_B, INTEL_GVT_IRQ_INFO_DE_PORT);
SET_BIT_INFO(irq, 26, AUX_CHANNEL_C, INTEL_GVT_IRQ_INFO_DE_PORT);
SET_BIT_INFO(irq, 27, AUX_CHANNEL_D, INTEL_GVT_IRQ_INFO_DE_PORT);
@ -690,14 +693,8 @@ int intel_gvt_init_irq(struct intel_gvt *gvt)
gvt_dbg_core("init irq framework\n");
if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv)) {
irq->ops = &gen8_irq_ops;
irq->irq_map = gen8_irq_map;
} else {
WARN_ON(1);
return -ENODEV;
}
irq->ops = &gen8_irq_ops;
irq->irq_map = gen8_irq_map;
/* common event initialization */
init_events(irq);

View file

@ -67,7 +67,7 @@ static void failsafe_emulate_mmio_rw(struct intel_vgpu *vgpu, uint64_t pa,
return;
gvt = vgpu->gvt;
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
if (reg_is_mmio(gvt, offset)) {
if (read)
@ -85,7 +85,7 @@ static void failsafe_emulate_mmio_rw(struct intel_vgpu *vgpu, uint64_t pa,
memcpy(pt, p_data, bytes);
}
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
}
/**
@ -109,7 +109,7 @@ int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, uint64_t pa,
failsafe_emulate_mmio_rw(vgpu, pa, p_data, bytes, true);
return 0;
}
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
@ -156,7 +156,7 @@ int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, uint64_t pa,
gvt_vgpu_err("fail to emulate MMIO read %08x len %d\n",
offset, bytes);
out:
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
return ret;
}
@ -182,7 +182,7 @@ int intel_vgpu_emulate_mmio_write(struct intel_vgpu *vgpu, uint64_t pa,
return 0;
}
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
@ -220,7 +220,7 @@ int intel_vgpu_emulate_mmio_write(struct intel_vgpu *vgpu, uint64_t pa,
gvt_vgpu_err("fail to emulate MMIO write %08x len %d\n", offset,
bytes);
out:
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
return ret;
}

View file

@ -42,15 +42,16 @@ struct intel_vgpu;
#define D_BDW (1 << 0)
#define D_SKL (1 << 1)
#define D_KBL (1 << 2)
#define D_BXT (1 << 3)
#define D_GEN9PLUS (D_SKL | D_KBL)
#define D_GEN8PLUS (D_BDW | D_SKL | D_KBL)
#define D_GEN9PLUS (D_SKL | D_KBL | D_BXT)
#define D_GEN8PLUS (D_BDW | D_SKL | D_KBL | D_BXT)
#define D_SKL_PLUS (D_SKL | D_KBL)
#define D_BDW_PLUS (D_BDW | D_SKL | D_KBL)
#define D_SKL_PLUS (D_SKL | D_KBL | D_BXT)
#define D_BDW_PLUS (D_BDW | D_SKL | D_KBL | D_BXT)
#define D_PRE_SKL (D_BDW)
#define D_ALL (D_BDW | D_SKL | D_KBL)
#define D_ALL (D_BDW | D_SKL | D_KBL | D_BXT)
typedef int (*gvt_mmio_func)(struct intel_vgpu *, unsigned int, void *,
unsigned int);

View file

@ -364,7 +364,8 @@ static void handle_tlb_pending_event(struct intel_vgpu *vgpu, int ring_id)
*/
fw = intel_uncore_forcewake_for_reg(dev_priv, reg,
FW_REG_READ | FW_REG_WRITE);
if (ring_id == RCS && (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)))
if (ring_id == RCS && (IS_SKYLAKE(dev_priv) ||
IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv)))
fw |= FORCEWAKE_RENDER;
intel_uncore_forcewake_get(dev_priv, fw);
@ -401,7 +402,7 @@ static void switch_mocs(struct intel_vgpu *pre, struct intel_vgpu *next,
if (WARN_ON(ring_id >= ARRAY_SIZE(regs)))
return;
if (IS_KABYLAKE(dev_priv) && ring_id == RCS)
if ((IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv)) && ring_id == RCS)
return;
if (!pre && !gen9_render_mocs.initialized)
@ -467,7 +468,9 @@ static void switch_mmio(struct intel_vgpu *pre,
u32 old_v, new_v;
dev_priv = pre ? pre->gvt->dev_priv : next->gvt->dev_priv;
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
if (IS_SKYLAKE(dev_priv)
|| IS_KABYLAKE(dev_priv)
|| IS_BROXTON(dev_priv))
switch_mocs(pre, next, ring_id);
for (mmio = dev_priv->gvt->engine_mmio_list.mmio;
@ -479,7 +482,8 @@ static void switch_mmio(struct intel_vgpu *pre,
* state image on kabylake, it's initialized by lri command and
* save or restore with context together.
*/
if (IS_KABYLAKE(dev_priv) && mmio->in_context)
if ((IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv))
&& mmio->in_context)
continue;
// save
@ -574,7 +578,9 @@ void intel_gvt_init_engine_mmio_context(struct intel_gvt *gvt)
{
struct engine_mmio *mmio;
if (IS_SKYLAKE(gvt->dev_priv) || IS_KABYLAKE(gvt->dev_priv))
if (IS_SKYLAKE(gvt->dev_priv) ||
IS_KABYLAKE(gvt->dev_priv) ||
IS_BROXTON(gvt->dev_priv))
gvt->engine_mmio_list.mmio = gen9_engine_mmio_list;
else
gvt->engine_mmio_list.mmio = gen8_engine_mmio_list;

View file

@ -157,11 +157,10 @@ int intel_vgpu_disable_page_track(struct intel_vgpu *vgpu, unsigned long gfn)
int intel_vgpu_page_track_handler(struct intel_vgpu *vgpu, u64 gpa,
void *data, unsigned int bytes)
{
struct intel_gvt *gvt = vgpu->gvt;
struct intel_vgpu_page_track *page_track;
int ret = 0;
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
page_track = intel_vgpu_find_page_track(vgpu, gpa >> PAGE_SHIFT);
if (!page_track) {
@ -179,6 +178,6 @@ int intel_vgpu_page_track_handler(struct intel_vgpu *vgpu, u64 gpa,
}
out:
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
return ret;
}

View file

@ -228,7 +228,7 @@ void intel_gvt_schedule(struct intel_gvt *gvt)
struct gvt_sched_data *sched_data = gvt->scheduler.sched_data;
ktime_t cur_time;
mutex_lock(&gvt->lock);
mutex_lock(&gvt->sched_lock);
cur_time = ktime_get();
if (test_and_clear_bit(INTEL_GVT_REQUEST_SCHED,
@ -244,7 +244,7 @@ void intel_gvt_schedule(struct intel_gvt *gvt)
vgpu_update_timeslice(gvt->scheduler.current_vgpu, cur_time);
tbs_sched_func(sched_data);
mutex_unlock(&gvt->lock);
mutex_unlock(&gvt->sched_lock);
}
static enum hrtimer_restart tbs_timer_fn(struct hrtimer *timer_data)
@ -359,39 +359,65 @@ static struct intel_gvt_sched_policy_ops tbs_schedule_ops = {
int intel_gvt_init_sched_policy(struct intel_gvt *gvt)
{
gvt->scheduler.sched_ops = &tbs_schedule_ops;
int ret;
return gvt->scheduler.sched_ops->init(gvt);
mutex_lock(&gvt->sched_lock);
gvt->scheduler.sched_ops = &tbs_schedule_ops;
ret = gvt->scheduler.sched_ops->init(gvt);
mutex_unlock(&gvt->sched_lock);
return ret;
}
void intel_gvt_clean_sched_policy(struct intel_gvt *gvt)
{
mutex_lock(&gvt->sched_lock);
gvt->scheduler.sched_ops->clean(gvt);
mutex_unlock(&gvt->sched_lock);
}
/* For the per-vgpu scheduler policy there are two pieces of per-vgpu data:
* sched_data and sched_ctl. We treat these two as part of the global
* scheduler, so they are protected by gvt->sched_lock.
* Callers must decide for themselves whether vgpu_lock also needs to be
* held outside.
*/
int intel_vgpu_init_sched_policy(struct intel_vgpu *vgpu)
{
return vgpu->gvt->scheduler.sched_ops->init_vgpu(vgpu);
int ret;
mutex_lock(&vgpu->gvt->sched_lock);
ret = vgpu->gvt->scheduler.sched_ops->init_vgpu(vgpu);
mutex_unlock(&vgpu->gvt->sched_lock);
return ret;
}
void intel_vgpu_clean_sched_policy(struct intel_vgpu *vgpu)
{
mutex_lock(&vgpu->gvt->sched_lock);
vgpu->gvt->scheduler.sched_ops->clean_vgpu(vgpu);
mutex_unlock(&vgpu->gvt->sched_lock);
}
void intel_vgpu_start_schedule(struct intel_vgpu *vgpu)
{
struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
mutex_lock(&vgpu->gvt->sched_lock);
if (!vgpu_data->active) {
gvt_dbg_core("vgpu%d: start schedule\n", vgpu->id);
vgpu->gvt->scheduler.sched_ops->start_schedule(vgpu);
}
mutex_unlock(&vgpu->gvt->sched_lock);
}
void intel_gvt_kick_schedule(struct intel_gvt *gvt)
{
mutex_lock(&gvt->sched_lock);
intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
mutex_unlock(&gvt->sched_lock);
}
void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
@ -406,6 +432,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
gvt_dbg_core("vgpu%d: stop schedule\n", vgpu->id);
mutex_lock(&vgpu->gvt->sched_lock);
scheduler->sched_ops->stop_schedule(vgpu);
if (scheduler->next_vgpu == vgpu)
@ -425,4 +452,5 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
}
}
spin_unlock_bh(&scheduler->mmio_context_lock);
mutex_unlock(&vgpu->gvt->sched_lock);
}
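
Taken together, the hunks in this and the preceding GVT files replace the single gvt->lock with finer-grained locking: a per-vGPU vgpu_lock for MMIO emulation, page tracking and workload handling, a gvt-wide sched_lock for scheduler state, while gvt->lock is kept for global bookkeeping such as the vgpu idr and type updates. A minimal sketch of the nesting used by complete_current_workload() further down, with simplified stand-in structures (not the driver's real layout):

#include <linux/mutex.h>

/* Stand-ins for the real GVT structures, for illustration only. */
struct toy_gvt {
        struct mutex lock;       /* global: vgpu idr, type info */
        struct mutex sched_lock; /* scheduler / timeslice state */
};

struct toy_vgpu {
        struct toy_gvt *gvt;
        struct mutex vgpu_lock;  /* per-vGPU emulation and workload state */
};

/* vgpu_lock is taken first and sched_lock nests inside it. */
static void toy_complete_workload(struct toy_vgpu *vgpu)
{
        mutex_lock(&vgpu->vgpu_lock);
        mutex_lock(&vgpu->gvt->sched_lock);
        /* ... update scheduler and per-vGPU workload state ... */
        mutex_unlock(&vgpu->gvt->sched_lock);
        mutex_unlock(&vgpu->vgpu_lock);
}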

View file

@ -45,11 +45,10 @@ static void set_context_pdp_root_pointer(
struct execlist_ring_context *ring_context,
u32 pdp[8])
{
struct execlist_mmio_pair *pdp_pair = &ring_context->pdp3_UDW;
int i;
for (i = 0; i < 8; i++)
pdp_pair[i].val = pdp[7 - i];
ring_context->pdps[i].val = pdp[7 - i];
}
static void update_shadow_pdps(struct intel_vgpu_workload *workload)
@ -298,7 +297,8 @@ static int copy_workload_to_ring_buffer(struct intel_vgpu_workload *workload)
void *shadow_ring_buffer_va;
u32 *cs;
if (IS_KABYLAKE(req->i915) && is_inhibit_context(req->hw_context))
if ((IS_KABYLAKE(req->i915) || IS_BROXTON(req->i915))
&& is_inhibit_context(req->hw_context))
intel_vgpu_restore_inhibit_context(vgpu, req);
/* allocate shadow ring buffer */
@ -634,6 +634,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
ring_id, workload);
mutex_lock(&vgpu->vgpu_lock);
mutex_lock(&dev_priv->drm.struct_mutex);
ret = intel_gvt_scan_and_shadow_workload(workload);
@ -654,6 +655,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
}
mutex_unlock(&dev_priv->drm.struct_mutex);
mutex_unlock(&vgpu->vgpu_lock);
return ret;
}
@ -663,7 +665,7 @@ static struct intel_vgpu_workload *pick_next_workload(
struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
struct intel_vgpu_workload *workload = NULL;
mutex_lock(&gvt->lock);
mutex_lock(&gvt->sched_lock);
/*
* no current vgpu / will be scheduled out / no workload
@ -709,7 +711,7 @@ static struct intel_vgpu_workload *pick_next_workload(
atomic_inc(&workload->vgpu->submission.running_workload_num);
out:
mutex_unlock(&gvt->lock);
mutex_unlock(&gvt->sched_lock);
return workload;
}
@ -807,7 +809,8 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
struct i915_request *rq = workload->req;
int event;
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
mutex_lock(&gvt->sched_lock);
/* For the workload w/ request, needs to wait for the context
* switch to make sure request is completed.
@ -883,7 +886,8 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
if (gvt->scheduler.need_reschedule)
intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
mutex_unlock(&gvt->lock);
mutex_unlock(&gvt->sched_lock);
mutex_unlock(&vgpu->vgpu_lock);
}
struct workload_thread_param {
@ -901,7 +905,8 @@ static int workload_thread(void *priv)
struct intel_vgpu *vgpu = NULL;
int ret;
bool need_force_wake = IS_SKYLAKE(gvt->dev_priv)
|| IS_KABYLAKE(gvt->dev_priv);
|| IS_KABYLAKE(gvt->dev_priv)
|| IS_BROXTON(gvt->dev_priv);
DEFINE_WAIT_FUNC(wait, woken_wake_function);
kfree(p);
@ -935,9 +940,7 @@ static int workload_thread(void *priv)
intel_uncore_forcewake_get(gvt->dev_priv,
FORCEWAKE_ALL);
mutex_lock(&gvt->lock);
ret = dispatch_workload(workload);
mutex_unlock(&gvt->lock);
if (ret) {
vgpu = workload->vgpu;
@ -1228,7 +1231,7 @@ static void read_guest_pdps(struct intel_vgpu *vgpu,
u64 gpa;
int i;
gpa = ring_context_gpa + RING_CTX_OFF(pdp3_UDW.val);
gpa = ring_context_gpa + RING_CTX_OFF(pdps[0].val);
for (i = 0; i < 8; i++)
intel_gvt_hypervisor_read_gpa(vgpu,

View file

@ -58,6 +58,9 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.fence_num)) = vgpu_fence_sz(vgpu);
vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot)) = UINT_MAX;
vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot)) = UINT_MAX;
gvt_dbg_core("Populate PVINFO PAGE for vGPU %d\n", vgpu->id);
gvt_dbg_core("aperture base [GMADR] 0x%llx size 0x%llx\n",
vgpu_aperture_gmadr_base(vgpu), vgpu_aperture_sz(vgpu));
@ -223,22 +226,20 @@ void intel_gvt_activate_vgpu(struct intel_vgpu *vgpu)
*/
void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
{
struct intel_gvt *gvt = vgpu->gvt;
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
vgpu->active = false;
if (atomic_read(&vgpu->submission.running_workload_num)) {
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
intel_gvt_wait_vgpu_idle(vgpu);
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
}
intel_vgpu_stop_schedule(vgpu);
intel_vgpu_dmabuf_cleanup(vgpu);
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
}
/**
@ -252,14 +253,11 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
{
struct intel_gvt *gvt = vgpu->gvt;
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
WARN(vgpu->active, "vGPU is still active!\n");
intel_gvt_debugfs_remove_vgpu(vgpu);
idr_remove(&gvt->vgpu_idr, vgpu->id);
if (idr_is_empty(&gvt->vgpu_idr))
intel_gvt_clean_irq(gvt);
intel_vgpu_clean_sched_policy(vgpu);
intel_vgpu_clean_submission(vgpu);
intel_vgpu_clean_display(vgpu);
@ -269,10 +267,16 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
intel_vgpu_free_resource(vgpu);
intel_vgpu_clean_mmio(vgpu);
intel_vgpu_dmabuf_cleanup(vgpu);
vfree(vgpu);
mutex_unlock(&vgpu->vgpu_lock);
mutex_lock(&gvt->lock);
idr_remove(&gvt->vgpu_idr, vgpu->id);
if (idr_is_empty(&gvt->vgpu_idr))
intel_gvt_clean_irq(gvt);
intel_gvt_update_vgpu_types(gvt);
mutex_unlock(&gvt->lock);
vfree(vgpu);
}
#define IDLE_VGPU_IDR 0
@ -298,6 +302,7 @@ struct intel_vgpu *intel_gvt_create_idle_vgpu(struct intel_gvt *gvt)
vgpu->id = IDLE_VGPU_IDR;
vgpu->gvt = gvt;
mutex_init(&vgpu->vgpu_lock);
for (i = 0; i < I915_NUM_ENGINES; i++)
INIT_LIST_HEAD(&vgpu->submission.workload_q_head[i]);
@ -324,7 +329,10 @@ struct intel_vgpu *intel_gvt_create_idle_vgpu(struct intel_gvt *gvt)
*/
void intel_gvt_destroy_idle_vgpu(struct intel_vgpu *vgpu)
{
mutex_lock(&vgpu->vgpu_lock);
intel_vgpu_clean_sched_policy(vgpu);
mutex_unlock(&vgpu->vgpu_lock);
vfree(vgpu);
}
@ -342,8 +350,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
if (!vgpu)
return ERR_PTR(-ENOMEM);
mutex_lock(&gvt->lock);
ret = idr_alloc(&gvt->vgpu_idr, vgpu, IDLE_VGPU_IDR + 1, GVT_MAX_VGPU,
GFP_KERNEL);
if (ret < 0)
@ -353,6 +359,7 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
vgpu->handle = param->handle;
vgpu->gvt = gvt;
vgpu->sched_ctl.weight = param->weight;
mutex_init(&vgpu->vgpu_lock);
INIT_LIST_HEAD(&vgpu->dmabuf_obj_list_head);
INIT_RADIX_TREE(&vgpu->page_track_tree, GFP_KERNEL);
idr_init(&vgpu->object_idr);
@ -400,8 +407,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
if (ret)
goto out_clean_sched_policy;
mutex_unlock(&gvt->lock);
return vgpu;
out_clean_sched_policy:
@ -424,7 +429,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
idr_remove(&gvt->vgpu_idr, vgpu->id);
out_free_vgpu:
vfree(vgpu);
mutex_unlock(&gvt->lock);
return ERR_PTR(ret);
}
@ -456,12 +460,12 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
param.low_gm_sz = BYTES_TO_MB(param.low_gm_sz);
param.high_gm_sz = BYTES_TO_MB(param.high_gm_sz);
mutex_lock(&gvt->lock);
vgpu = __intel_gvt_create_vgpu(gvt, &param);
if (IS_ERR(vgpu))
return vgpu;
/* calculate left instance change for types */
intel_gvt_update_vgpu_types(gvt);
if (!IS_ERR(vgpu))
/* calculate left instance change for types */
intel_gvt_update_vgpu_types(gvt);
mutex_unlock(&gvt->lock);
return vgpu;
}
@ -473,7 +477,7 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
* @engine_mask: engines to reset for GT reset
*
* This function is called when user wants to reset a virtual GPU through
* device model reset or GT reset. The caller should hold the gvt lock.
* device model reset or GT reset. The caller should hold the vgpu lock.
*
* vGPU Device Model Level Reset (DMLR) simulates the PCI level reset to reset
* the whole vGPU to default state as when it is created. This vGPU function
@ -513,9 +517,9 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
* scheduler when the reset is triggered by current vgpu.
*/
if (scheduler->current_vgpu == NULL) {
mutex_unlock(&gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
intel_gvt_wait_vgpu_idle(vgpu);
mutex_lock(&gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
}
intel_vgpu_reset_submission(vgpu, resetting_eng);
@ -555,7 +559,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
*/
void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu)
{
mutex_lock(&vgpu->gvt->lock);
mutex_lock(&vgpu->vgpu_lock);
intel_gvt_reset_vgpu_locked(vgpu, true, 0);
mutex_unlock(&vgpu->gvt->lock);
mutex_unlock(&vgpu->vgpu_lock);
}

View file

@ -1359,11 +1359,12 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
seq_printf(m, "\tseqno = %x [current %x, last %x]\n",
engine->hangcheck.seqno, seqno[id],
intel_engine_last_submit(engine));
seq_printf(m, "\twaiters? %s, fake irq active? %s, stalled? %s\n",
seq_printf(m, "\twaiters? %s, fake irq active? %s, stalled? %s, wedged? %s\n",
yesno(intel_engine_has_waiter(engine)),
yesno(test_bit(engine->id,
&dev_priv->gpu_error.missed_irq_rings)),
yesno(engine->hangcheck.stalled));
yesno(engine->hangcheck.stalled),
yesno(engine->hangcheck.wedged));
spin_lock_irq(&b->rb_lock);
for (rb = rb_first(&b->waiters); rb; rb = rb_next(rb)) {
@ -2536,7 +2537,7 @@ static int i915_guc_log_level_get(void *data, u64 *val)
if (!USES_GUC(dev_priv))
return -ENODEV;
*val = intel_guc_log_level_get(&dev_priv->guc.log);
*val = intel_guc_log_get_level(&dev_priv->guc.log);
return 0;
}
@ -2548,7 +2549,7 @@ static int i915_guc_log_level_set(void *data, u64 val)
if (!USES_GUC(dev_priv))
return -ENODEV;
return intel_guc_log_level_set(&dev_priv->guc.log, val);
return intel_guc_log_set_level(&dev_priv->guc.log, val);
}
DEFINE_SIMPLE_ATTRIBUTE(i915_guc_log_level_fops,
@ -2660,8 +2661,6 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
seq_printf(m, "Enabled: %s\n", yesno((bool)dev_priv->psr.enabled));
seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
dev_priv->psr.busy_frontbuffer_bits);
seq_printf(m, "Re-enable work scheduled: %s\n",
yesno(work_busy(&dev_priv->psr.work.work)));
if (dev_priv->psr.psr2_enabled)
enabled = I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE;
@ -3379,28 +3378,13 @@ static int i915_shared_dplls_info(struct seq_file *m, void *unused)
static int i915_wa_registers(struct seq_file *m, void *unused)
{
struct drm_i915_private *dev_priv = node_to_i915(m->private);
struct i915_workarounds *workarounds = &dev_priv->workarounds;
struct i915_workarounds *wa = &node_to_i915(m->private)->workarounds;
int i;
intel_runtime_pm_get(dev_priv);
seq_printf(m, "Workarounds applied: %d\n", workarounds->count);
for (i = 0; i < workarounds->count; ++i) {
i915_reg_t addr;
u32 mask, value, read;
bool ok;
addr = workarounds->reg[i].addr;
mask = workarounds->reg[i].mask;
value = workarounds->reg[i].value;
read = I915_READ(addr);
ok = (value & mask) == (read & mask);
seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X, read: 0x%08x, status: %s\n",
i915_mmio_reg_offset(addr), value, mask, read, ok ? "OK" : "FAIL");
}
intel_runtime_pm_put(dev_priv);
seq_printf(m, "Workarounds applied: %d\n", wa->count);
for (i = 0; i < wa->count; ++i)
seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X\n",
wa->reg[i].addr, wa->reg[i].value, wa->reg[i].mask);
return 0;
}

View file

@ -73,6 +73,12 @@ bool __i915_inject_load_failure(const char *func, int line)
return false;
}
bool i915_error_injected(void)
{
return i915_load_fail_count && !i915_modparams.inject_load_failure;
}
#endif
#define FDO_BUG_URL "https://bugs.freedesktop.org/enter_bug.cgi?product=DRI"
@ -115,20 +121,6 @@ __i915_printk(struct drm_i915_private *dev_priv, const char *level,
va_end(args);
}
static bool i915_error_injected(struct drm_i915_private *dev_priv)
{
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
return i915_load_fail_count && !i915_modparams.inject_load_failure;
#else
return false;
#endif
}
#define i915_load_error(i915, fmt, ...) \
__i915_printk(i915, \
i915_error_injected(i915) ? KERN_DEBUG : KERN_ERR, \
fmt, ##__VA_ARGS__)
/* Map PCH device id to PCH type, or PCH_NONE if unknown. */
static enum intel_pch
intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
@ -248,14 +240,6 @@ static void intel_detect_pch(struct drm_i915_private *dev_priv)
{
struct pci_dev *pch = NULL;
/* In all current cases, num_pipes is equivalent to the PCH_NOP setting
* (which really amounts to a PCH but no South Display).
*/
if (INTEL_INFO(dev_priv)->num_pipes == 0) {
dev_priv->pch_type = PCH_NOP;
return;
}
/*
* The reason to probe ISA bridge instead of Dev31:Fun0 is to
* make graphics device passthrough work easy for VMM, that only
@ -284,18 +268,28 @@ static void intel_detect_pch(struct drm_i915_private *dev_priv)
} else if (intel_is_virt_pch(id, pch->subsystem_vendor,
pch->subsystem_device)) {
id = intel_virt_detect_pch(dev_priv);
if (id) {
pch_type = intel_pch_type(dev_priv, id);
if (WARN_ON(pch_type == PCH_NONE))
pch_type = PCH_NOP;
} else {
pch_type = PCH_NOP;
}
pch_type = intel_pch_type(dev_priv, id);
/* Sanity check virtual PCH id */
if (WARN_ON(id && pch_type == PCH_NONE))
id = 0;
dev_priv->pch_type = pch_type;
dev_priv->pch_id = id;
break;
}
}
/*
* Use PCH_NOP (PCH but no South Display) for PCH platforms without
* display.
*/
if (pch && INTEL_INFO(dev_priv)->num_pipes == 0) {
DRM_DEBUG_KMS("Display disabled, reverting to NOP PCH\n");
dev_priv->pch_type = PCH_NOP;
dev_priv->pch_id = 0;
}
if (!pch)
DRM_DEBUG_KMS("No PCH found.\n");
@ -1704,6 +1698,8 @@ static int i915_drm_resume(struct drm_device *dev)
disable_rpm_wakeref_asserts(dev_priv);
intel_sanitize_gt_powersave(dev_priv);
i915_gem_sanitize(dev_priv);
ret = i915_ggtt_enable_hw(dev_priv);
if (ret)
DRM_ERROR("failed to re-enable GGTT\n");
@ -1845,7 +1841,7 @@ static int i915_drm_resume_early(struct drm_device *dev)
else
intel_display_set_init_power(dev_priv, true);
i915_gem_sanitize(dev_priv);
intel_engines_sanitize(dev_priv);
enable_rpm_wakeref_asserts(dev_priv);

View file

@ -40,6 +40,7 @@
#include <linux/hash.h>
#include <linux/intel-iommu.h>
#include <linux/kref.h>
#include <linux/mm_types.h>
#include <linux/perf_event.h>
#include <linux/pm_qos.h>
#include <linux/reservation.h>
@ -85,8 +86,8 @@
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
#define DRIVER_DATE "20180606"
#define DRIVER_TIMESTAMP 1528323047
#define DRIVER_DATE "20180620"
#define DRIVER_TIMESTAMP 1529529048
/* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
* WARN_ON()) for hw state sanity checks to check for unexpected conditions
@ -107,13 +108,24 @@
I915_STATE_WARN((x), "%s", "WARN_ON(" __stringify(x) ")")
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
bool __i915_inject_load_failure(const char *func, int line);
#define i915_inject_load_failure() \
__i915_inject_load_failure(__func__, __LINE__)
bool i915_error_injected(void);
#else
#define i915_inject_load_failure() false
#define i915_error_injected() false
#endif
#define i915_load_error(i915, fmt, ...) \
__i915_printk(i915, i915_error_injected() ? KERN_DEBUG : KERN_ERR, \
fmt, ##__VA_ARGS__)
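
The i915_load_error() macro above logs at KERN_DEBUG instead of KERN_ERR when the failure was injected on purpose, so fault-injection runs in CI stay quiet; i915_gem_init() later in this diff uses it when declaring the GPU wedged. A hypothetical call site, for illustration only (hypothetical_init_step is not a real driver function):

        ret = hypothetical_init_step(dev_priv);
        if (ret) {
                /* KERN_ERR for real failures, KERN_DEBUG under fault injection. */
                i915_load_error(dev_priv, "init step failed: %d\n", ret);
                return ret;
        }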
typedef struct {
uint32_t val;
} uint_fixed_16_16_t;
@ -340,14 +352,21 @@ struct drm_i915_file_private {
unsigned int bsd_engine;
/* Client can have a maximum of 3 contexts banned before
* it is denied of creating new contexts. As one context
* ban needs 4 consecutive hangs, and more if there is
* progress in between, this is a last resort stop gap measure
* to limit the badly behaving clients access to gpu.
/*
* Every context ban increments the per-client ban score. Hangs in
* short succession also increment the ban score. If the ban threshold
* is reached, the client is considered banned and submitting more work
* will fail. This is a stop-gap measure to limit a badly behaving
* client's access to the GPU. Note that unbannable contexts never
* increment the client ban score.
*/
#define I915_MAX_CLIENT_CONTEXT_BANS 3
atomic_t context_bans;
#define I915_CLIENT_SCORE_HANG_FAST 1
#define I915_CLIENT_FAST_HANG_JIFFIES (60 * HZ)
#define I915_CLIENT_SCORE_CONTEXT_BAN 3
#define I915_CLIENT_SCORE_BANNED 9
/** ban_score: Accumulated score of all ctx bans and fast hangs. */
atomic_t ban_score;
unsigned long hang_timestamp;
};
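
With the constants above, the client ban score accrues roughly as follows: a bannable context that ends up banned adds I915_CLIENT_SCORE_CONTEXT_BAN (3), and a guilty hang landing within I915_CLIENT_FAST_HANG_JIFFIES (60 s) of the previous one adds I915_CLIENT_SCORE_HANG_FAST (1); once ban_score reaches I915_CLIENT_SCORE_BANNED (9), client_is_banned() later in this diff rejects new context creation. A rough worked example, illustrative only:

        1st context ban                               +3        total 3
        2nd ban, within 60 s of the previous hang     +3 +1     total 7
        3rd ban, again within 60 s                    +3 +1     total 11  (>= 9, client banned)

i.e. three context bans in quick succession are enough to trip the client-wide ban.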
/* Interface history:
@ -601,7 +620,7 @@ struct i915_psr {
bool sink_support;
struct intel_dp *enabled;
bool active;
struct delayed_work work;
struct work_struct work;
unsigned busy_frontbuffer_bits;
bool sink_psr2_support;
bool link_standby;
@ -631,7 +650,7 @@ enum intel_pch {
PCH_KBP, /* Kaby Lake PCH */
PCH_CNP, /* Cannon Lake PCH */
PCH_ICP, /* Ice Lake PCH */
PCH_NOP,
PCH_NOP, /* PCH without south display */
};
enum intel_sbi_destination {
@ -994,6 +1013,8 @@ struct i915_gem_mm {
#define I915_ENGINE_DEAD_TIMEOUT (4 * HZ) /* Seqno, head and subunits dead */
#define I915_SEQNO_DEAD_TIMEOUT (12 * HZ) /* Seqno dead with active head */
#define I915_ENGINE_WEDGED_TIMEOUT (60 * HZ) /* Reset but no recovery? */
enum modeset_restore {
MODESET_ON_LID_OPEN,
MODESET_DONE,
@ -1004,6 +1025,7 @@ enum modeset_restore {
#define DP_AUX_B 0x10
#define DP_AUX_C 0x20
#define DP_AUX_D 0x30
#define DP_AUX_E 0x50
#define DP_AUX_F 0x60
#define DDC_PIN_B 0x05
@ -1290,7 +1312,7 @@ struct i915_frontbuffer_tracking {
};
struct i915_wa_reg {
i915_reg_t addr;
u32 addr;
u32 value;
/* bitmask representing WA bits */
u32 mask;
@ -3174,7 +3196,7 @@ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
void i915_gem_resume(struct drm_i915_private *dev_priv);
int i915_gem_fault(struct vm_fault *vmf);
vm_fault_t i915_gem_fault(struct vm_fault *vmf);
int i915_gem_object_wait(struct drm_i915_gem_object *obj,
unsigned int flags,
long timeout,
@ -3678,14 +3700,6 @@ static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
}
static inline unsigned long
timespec_to_jiffies_timeout(const struct timespec *value)
{
unsigned long j = timespec_to_jiffies(value);
return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
}
/*
* If you need to wait X milliseconds between events A and B, but event B
* doesn't happen exactly after event A, you record the timestamp (jiffies) of

View file

@ -1995,7 +1995,7 @@ compute_partial_view(struct drm_i915_gem_object *obj,
* The current feature set supported by i915_gem_fault() and thus GTT mmaps
* is exposed via I915_PARAM_MMAP_GTT_VERSION (see i915_gem_mmap_gtt_version).
*/
int i915_gem_fault(struct vm_fault *vmf)
vm_fault_t i915_gem_fault(struct vm_fault *vmf)
{
#define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
struct vm_area_struct *area = vmf->vma;
@ -2112,10 +2112,8 @@ int i915_gem_fault(struct vm_fault *vmf)
* fail). But any other -EIO isn't ours (e.g. swap in failure)
* and so needs to be reported.
*/
if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
ret = VM_FAULT_SIGBUS;
break;
}
if (!i915_terminally_wedged(&dev_priv->gpu_error))
return VM_FAULT_SIGBUS;
case -EAGAIN:
/*
* EAGAIN means the gpu is hung and we'll wait for the error
@ -2130,21 +2128,16 @@ int i915_gem_fault(struct vm_fault *vmf)
* EBUSY is ok: this just means that another thread
* already did the job.
*/
ret = VM_FAULT_NOPAGE;
break;
return VM_FAULT_NOPAGE;
case -ENOMEM:
ret = VM_FAULT_OOM;
break;
return VM_FAULT_OOM;
case -ENOSPC:
case -EFAULT:
ret = VM_FAULT_SIGBUS;
break;
return VM_FAULT_SIGBUS;
default:
WARN_ONCE(ret, "unhandled error in i915_gem_fault: %i\n", ret);
ret = VM_FAULT_SIGBUS;
break;
return VM_FAULT_SIGBUS;
}
return ret;
}
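
i915_gem_fault() above now has the vm_fault_t return type and returns the VM_FAULT_* codes directly instead of collecting them in ret. A minimal sketch of the same errno-to-fault-code mapping, pulled into a hypothetical helper (the helper name and standalone form are mine, not the driver's):

#include <linux/errno.h>
#include <linux/mm.h>   /* vm_fault_t, VM_FAULT_* */

/* Hypothetical helper mirroring the switch in i915_gem_fault(). */
static vm_fault_t toy_errno_to_fault(int err, bool terminally_wedged)
{
        switch (err) {
        case -EIO:
                /* Only an -EIO caused by a wedged GPU is worth retrying;
                 * any other -EIO (e.g. swap-in failure) is fatal. */
                if (!terminally_wedged)
                        return VM_FAULT_SIGBUS;
                /* fall through */
        case -EAGAIN:
        case -ERESTARTSYS:
        case -EINTR:
        case -EBUSY:
                return VM_FAULT_NOPAGE; /* retry the fault */
        case -ENOMEM:
                return VM_FAULT_OOM;
        case -ENOSPC:
        case -EFAULT:
        default:
                return VM_FAULT_SIGBUS;
        }
}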
static void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj)
@ -2408,29 +2401,15 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
rcu_read_unlock();
}
void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
enum i915_mm_subclass subclass)
static struct sg_table *
__i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct sg_table *pages;
if (i915_gem_object_has_pinned_pages(obj))
return;
GEM_BUG_ON(obj->bind_count);
if (!i915_gem_object_has_pages(obj))
return;
/* May be called by shrinker from within get_pages() (on another bo) */
mutex_lock_nested(&obj->mm.lock, subclass);
if (unlikely(atomic_read(&obj->mm.pages_pin_count)))
goto unlock;
/* ->put_pages might need to allocate memory for the bit17 swizzle
* array, hence protect them from being reaped by removing them from gtt
* lists early. */
pages = fetch_and_zero(&obj->mm.pages);
GEM_BUG_ON(!pages);
if (!pages)
return NULL;
spin_lock(&i915->mm.obj_lock);
list_del(&obj->mm.link);
@ -2449,12 +2428,37 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
}
__i915_gem_object_reset_page_iter(obj);
obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
return pages;
}
void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
enum i915_mm_subclass subclass)
{
struct sg_table *pages;
if (i915_gem_object_has_pinned_pages(obj))
return;
GEM_BUG_ON(obj->bind_count);
if (!i915_gem_object_has_pages(obj))
return;
/* May be called by shrinker from within get_pages() (on another bo) */
mutex_lock_nested(&obj->mm.lock, subclass);
if (unlikely(atomic_read(&obj->mm.pages_pin_count)))
goto unlock;
/*
* ->put_pages might need to allocate memory for the bit17 swizzle
* array, hence protect them from being reaped by removing them from gtt
* lists early.
*/
pages = __i915_gem_object_unset_pages(obj);
if (!IS_ERR(pages))
obj->ops->put_pages(obj, pages);
obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
unlock:
mutex_unlock(&obj->mm.lock);
}
@ -2937,32 +2941,54 @@ i915_gem_object_pwrite_gtt(struct drm_i915_gem_object *obj,
return 0;
}
static void i915_gem_client_mark_guilty(struct drm_i915_file_private *file_priv,
const struct i915_gem_context *ctx)
{
unsigned int score;
unsigned long prev_hang;
if (i915_gem_context_is_banned(ctx))
score = I915_CLIENT_SCORE_CONTEXT_BAN;
else
score = 0;
prev_hang = xchg(&file_priv->hang_timestamp, jiffies);
if (time_before(jiffies, prev_hang + I915_CLIENT_FAST_HANG_JIFFIES))
score += I915_CLIENT_SCORE_HANG_FAST;
if (score) {
atomic_add(score, &file_priv->ban_score);
DRM_DEBUG_DRIVER("client %s: gained %u ban score, now %u\n",
ctx->name, score,
atomic_read(&file_priv->ban_score));
}
}
static void i915_gem_context_mark_guilty(struct i915_gem_context *ctx)
{
bool banned;
unsigned int score;
bool banned, bannable;
atomic_inc(&ctx->guilty_count);
banned = false;
if (i915_gem_context_is_bannable(ctx)) {
unsigned int score;
bannable = i915_gem_context_is_bannable(ctx);
score = atomic_add_return(CONTEXT_SCORE_GUILTY, &ctx->ban_score);
banned = score >= CONTEXT_SCORE_BAN_THRESHOLD;
score = atomic_add_return(CONTEXT_SCORE_GUILTY,
&ctx->ban_score);
banned = score >= CONTEXT_SCORE_BAN_THRESHOLD;
DRM_DEBUG_DRIVER("context %s marked guilty (score %d) banned? %s\n",
ctx->name, score, yesno(banned));
}
if (!banned)
/* Cool contexts don't accumulate client ban score */
if (!bannable)
return;
i915_gem_context_set_banned(ctx);
if (!IS_ERR_OR_NULL(ctx->file_priv)) {
atomic_inc(&ctx->file_priv->context_bans);
DRM_DEBUG_DRIVER("client %s has had %d context banned\n",
ctx->name, atomic_read(&ctx->file_priv->context_bans));
if (banned) {
DRM_DEBUG_DRIVER("context %s: guilty %d, score %u, banned\n",
ctx->name, atomic_read(&ctx->guilty_count),
score);
i915_gem_context_set_banned(ctx);
}
if (!IS_ERR_OR_NULL(ctx->file_priv))
i915_gem_client_mark_guilty(ctx->file_priv, ctx);
}
static void i915_gem_context_mark_innocent(struct i915_gem_context *ctx)
@ -3140,15 +3166,17 @@ i915_gem_reset_request(struct intel_engine_cs *engine,
*/
request = i915_gem_find_active_request(engine);
if (request) {
unsigned long flags;
i915_gem_context_mark_innocent(request->gem_context);
dma_fence_set_error(&request->fence, -EAGAIN);
/* Rewind the engine to replay the incomplete rq */
spin_lock_irq(&engine->timeline.lock);
spin_lock_irqsave(&engine->timeline.lock, flags);
request = list_prev_entry(request, link);
if (&request->link == &engine->timeline.requests)
request = NULL;
spin_unlock_irq(&engine->timeline.lock);
spin_unlock_irqrestore(&engine->timeline.lock, flags);
}
}
@ -3209,7 +3237,7 @@ void i915_gem_reset(struct drm_i915_private *dev_priv,
rq = i915_request_alloc(engine,
dev_priv->kernel_context);
if (!IS_ERR(rq))
__i915_request_add(rq, false);
i915_request_add(rq);
}
}
@ -4962,8 +4990,7 @@ void __i915_gem_object_release_unless_active(struct drm_i915_gem_object *obj)
void i915_gem_sanitize(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
GEM_TRACE("\n");
@ -4989,14 +5016,11 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
* it may impact the display and we are uncertain about the stability
* of the reset, so this could be applied to even earlier gen.
*/
err = -ENODEV;
if (INTEL_GEN(i915) >= 5 && intel_has_gpu_reset(i915))
WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
/* Reset the submission backend after resume as well as the GPU reset */
for_each_engine(engine, i915, id) {
if (engine->reset.reset)
engine->reset.reset(engine, NULL);
}
err = WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
if (!err)
intel_engines_sanitize(i915);
intel_uncore_forcewake_put(i915, FORCEWAKE_ALL);
intel_runtime_pm_put(i915);
@ -5328,7 +5352,7 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
if (engine->init_context)
err = engine->init_context(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
if (err)
goto err_active;
}
@ -5514,8 +5538,12 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
* driver doesn't explode during runtime.
*/
err_init_hw:
i915_gem_wait_for_idle(dev_priv, I915_WAIT_LOCKED);
i915_gem_contexts_lost(dev_priv);
mutex_unlock(&dev_priv->drm.struct_mutex);
WARN_ON(i915_gem_suspend(dev_priv));
i915_gem_suspend_late(dev_priv);
mutex_lock(&dev_priv->drm.struct_mutex);
intel_uc_fini_hw(dev_priv);
err_uc_init:
intel_uc_fini(dev_priv);
@ -5544,7 +5572,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
* for all other failure, such as an allocation failure, bail.
*/
if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
DRM_ERROR("Failed to initialize GPU, declaring it wedged\n");
i915_load_error(dev_priv,
"Failed to initialize GPU, declaring it wedged!\n");
i915_gem_set_wedged(dev_priv);
}
ret = 0;
@ -5811,6 +5840,7 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
INIT_LIST_HEAD(&file_priv->mm.request_list);
file_priv->bsd_engine = -1;
file_priv->hang_timestamp = jiffies;
ret = i915_gem_context_open(i915, file);
if (ret)
@ -6091,16 +6121,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
goto err_unlock;
}
pages = fetch_and_zero(&obj->mm.pages);
if (pages) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
__i915_gem_object_reset_page_iter(obj);
spin_lock(&i915->mm.obj_lock);
list_del(&obj->mm.link);
spin_unlock(&i915->mm.obj_lock);
}
pages = __i915_gem_object_unset_pages(obj);
obj->ops = &i915_gem_phys_ops;
@ -6118,7 +6139,11 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
err_xfer:
obj->ops = &i915_gem_object_ops;
obj->mm.pages = pages;
if (!IS_ERR_OR_NULL(pages)) {
unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
}
err_unlock:
mutex_unlock(&obj->mm.lock);
return err;

View file

@ -700,14 +700,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
i915_timeline_sync_set(rq->timeline, &prev->fence);
}
/*
* Force a flush after the switch to ensure that all rendering
* and operations prior to switching to the kernel context hits
* memory. This should be guaranteed by the previous request,
* but an extra layer of paranoia before we declare the system
* idle (on suspend etc) is advisable!
*/
__i915_request_add(rq, true);
i915_request_add(rq);
}
return 0;
@ -715,7 +708,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
static bool client_is_banned(struct drm_i915_file_private *file_priv)
{
return atomic_read(&file_priv->context_bans) > I915_MAX_CLIENT_CONTEXT_BANS;
return atomic_read(&file_priv->ban_score) >= I915_CLIENT_SCORE_BANNED;
}
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,

View file

@ -489,7 +489,9 @@ eb_validate_vma(struct i915_execbuffer *eb,
}
static int
eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
eb_add_vma(struct i915_execbuffer *eb,
unsigned int i, unsigned batch_idx,
struct i915_vma *vma)
{
struct drm_i915_gem_exec_object2 *entry = &eb->exec[i];
int err;
@ -522,6 +524,24 @@ eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
eb->flags[i] = entry->flags;
vma->exec_flags = &eb->flags[i];
/*
* SNA is doing fancy tricks with compressing batch buffers, which leads
* to negative relocation deltas. Usually that works out ok since the
* relocate address is still positive, except when the batch is placed
* very low in the GTT. Ensure this doesn't happen.
*
* Note that actual hangs have only been observed on gen7, but for
* paranoia do it everywhere.
*/
if (i == batch_idx) {
if (!(eb->flags[i] & EXEC_OBJECT_PINNED))
eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS;
if (eb->reloc_cache.has_fence)
eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE;
eb->batch = vma;
}
err = 0;
if (eb_pin_vma(eb, entry, vma)) {
if (entry->offset != vma->node.start) {
@ -716,7 +736,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
{
struct radix_tree_root *handles_vma = &eb->ctx->handles_vma;
struct drm_i915_gem_object *obj;
unsigned int i;
unsigned int i, batch;
int err;
if (unlikely(i915_gem_context_is_closed(eb->ctx)))
@ -728,6 +748,8 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
INIT_LIST_HEAD(&eb->relocs);
INIT_LIST_HEAD(&eb->unbound);
batch = eb_batch_index(eb);
for (i = 0; i < eb->buffer_count; i++) {
u32 handle = eb->exec[i].handle;
struct i915_lut_handle *lut;
@ -770,33 +792,16 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
lut->handle = handle;
add_vma:
err = eb_add_vma(eb, i, vma);
err = eb_add_vma(eb, i, batch, vma);
if (unlikely(err))
goto err_vma;
GEM_BUG_ON(vma != eb->vma[i]);
GEM_BUG_ON(vma->exec_flags != &eb->flags[i]);
GEM_BUG_ON(drm_mm_node_allocated(&vma->node) &&
eb_vma_misplaced(&eb->exec[i], vma, eb->flags[i]));
}
/* take note of the batch buffer before we might reorder the lists */
i = eb_batch_index(eb);
eb->batch = eb->vma[i];
GEM_BUG_ON(eb->batch->exec_flags != &eb->flags[i]);
/*
* SNA is doing fancy tricks with compressing batch buffers, which leads
* to negative relocation deltas. Usually that works out ok since the
* relocate address is still positive, except when the batch is placed
* very low in the GTT. Ensure this doesn't happen.
*
* Note that actual hangs have only been observed on gen7, but for
* paranoia do it everywhere.
*/
if (!(eb->flags[i] & EXEC_OBJECT_PINNED))
eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS;
if (eb->reloc_cache.has_fence)
eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE;
eb->args->flags |= __EXEC_VALIDATED;
return eb_reserve(eb);
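
The reordering above moves the batch-buffer restrictions (the anti-SNA __EXEC_OBJECT_NEEDS_BIAS bias and, where the relocation cache has fences, EXEC_OBJECT_NEEDS_FENCE) out of a fix-up loop that ran after eb_lookup_vmas() and into eb_add_vma() itself, keyed on batch_idx, so the flags are already in place when eb_pin_vma() first pins the batch. Schematically (simplified, not literal driver code):

        /* inside eb_add_vma(), before the initial eb_pin_vma() */
        if (i == batch_idx) {
                if (!(eb->flags[i] & EXEC_OBJECT_PINNED))
                        eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS; /* keep batch away from GTT offset 0 */
                if (eb->reloc_cache.has_fence)
                        eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE;
                eb->batch = vma;
        }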
@ -916,7 +921,7 @@ static void reloc_gpu_flush(struct reloc_cache *cache)
i915_gem_object_unpin_map(cache->rq->batch->obj);
i915_gem_chipset_flush(cache->rq->i915);
__i915_request_add(cache->rq, true);
i915_request_add(cache->rq);
cache->rq = NULL;
}
@ -2433,7 +2438,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
trace_i915_request_queue(eb.request, eb.batch_flags);
err = eb_submit(&eb);
err_request:
__i915_request_add(eb.request, err == 0);
i915_request_add(eb.request);
add_to_client(eb.request, file);
if (fences)

File diff suppressed because it is too large.

View file

@ -58,6 +58,7 @@
struct drm_i915_file_private;
struct drm_i915_fence_reg;
struct i915_vma;
typedef u32 gen6_pte_t;
typedef u64 gen8_pte_t;
@ -254,6 +255,21 @@ struct i915_pml4 {
struct i915_page_directory_pointer *pdps[GEN8_PML4ES_PER_PML4];
};
struct i915_vma_ops {
/* Map an object into an address space with the given cache flags. */
int (*bind_vma)(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 flags);
/*
* Unmap an object from an address space. This usually consists of
* setting the valid PTE entries to a reserved scratch page.
*/
void (*unbind_vma)(struct i915_vma *vma);
int (*set_pages)(struct i915_vma *vma);
void (*clear_pages)(struct i915_vma *vma);
};
struct i915_address_space {
struct drm_mm mm;
struct drm_i915_private *i915;
@ -331,15 +347,8 @@ struct i915_address_space {
enum i915_cache_level cache_level,
u32 flags);
void (*cleanup)(struct i915_address_space *vm);
/** Unmap an object from an address space. This usually consists of
* setting the valid PTE entries to a reserved scratch page. */
void (*unbind_vma)(struct i915_vma *vma);
/* Map an object into an address space with the given cache flags. */
int (*bind_vma)(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 flags);
int (*set_pages)(struct i915_vma *vma);
void (*clear_pages)(struct i915_vma *vma);
struct i915_vma_ops vma_ops;
I915_SELFTEST_DECLARE(struct fault_attr fault_attr);
I915_SELFTEST_DECLARE(bool scrub_64K);
@ -387,7 +396,7 @@ struct i915_ggtt {
struct i915_hw_ppgtt {
struct i915_address_space vm;
struct kref ref;
struct drm_mm_node node;
unsigned long pd_dirty_rings;
union {
struct i915_pml4 pml4; /* GEN8+ & 48b PPGTT */
@ -395,13 +404,28 @@ struct i915_hw_ppgtt {
struct i915_page_directory pd; /* GEN6-7 */
};
gen6_pte_t __iomem *pd_addr;
int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
struct i915_request *rq);
void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
};
struct gen6_hw_ppgtt {
struct i915_hw_ppgtt base;
struct i915_vma *vma;
gen6_pte_t __iomem *pd_addr;
gen6_pte_t scratch_pte;
unsigned int pin_count;
bool scan_for_unused_pt;
};
#define __to_gen6_ppgtt(base) container_of(base, struct gen6_hw_ppgtt, base)
static inline struct gen6_hw_ppgtt *to_gen6_ppgtt(struct i915_hw_ppgtt *base)
{
BUILD_BUG_ON(offsetof(struct gen6_hw_ppgtt, base));
return __to_gen6_ppgtt(base);
}
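
struct gen6_hw_ppgtt above embeds the generic struct i915_hw_ppgtt as its first member, so to_gen6_ppgtt() can downcast with container_of(); the BUILD_BUG_ON asserts that the base really sits at offset zero. A hypothetical caller, for illustration:

        /* base is a struct i915_hw_ppgtt * known to be a gen6 ppgtt. */
        struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);

        /* gen6-only state such as ppgtt->pd_addr or ppgtt->pin_count is
         * now reachable without widening the common structure. */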
/*
* gen6_for_each_pde() iterates over every pde from start until start+length.
* If start and start+length are not perfectly divisible, the macro will round
@ -440,8 +464,8 @@ static inline u32 i915_pte_count(u64 addr, u64 length, unsigned int pde_shift)
const u64 mask = ~((1ULL << pde_shift) - 1);
u64 end;
WARN_ON(length == 0);
WARN_ON(offset_in_page(addr|length));
GEM_BUG_ON(length == 0);
GEM_BUG_ON(offset_in_page(addr | length));
end = addr + length;
@ -605,6 +629,9 @@ static inline void i915_ppgtt_put(struct i915_hw_ppgtt *ppgtt)
kref_put(&ppgtt->ref, i915_ppgtt_release);
}
int gen6_ppgtt_pin(struct i915_hw_ppgtt *base);
void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base);
void i915_check_and_clear_faults(struct drm_i915_private *dev_priv);
void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv);
void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv);
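
The per-VMA hooks above are grouped into struct i915_vma_ops and embedded in i915_address_space as vma_ops; the i915_vma.c hunks later in this diff point each VMA at them (vma->ops = &vm->vma_ops) and dispatch through vma->ops rather than vma->vm. A minimal sketch of that dispatch, with illustrative cache_level and bind_flags values only:

        /* Set once at VMA creation, as done in vma_create() in this diff. */
        vma->ops = &vma->vm->vma_ops;

        /* Binding and unbinding now go through the ops table. */
        err = vma->ops->bind_vma(vma, cache_level, bind_flags);
        if (err)
                return err;
        /* ... later ... */
        vma->ops->unbind_vma(vma);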

View file

@ -1050,6 +1050,9 @@ static u32 capture_error_bo(struct drm_i915_error_buffer *err,
int i = 0;
list_for_each_entry(vma, head, vm_link) {
if (!vma->obj)
continue;
if (pinned_only && !i915_vma_is_pinned(vma))
continue;

View file

@ -115,6 +115,13 @@ static const u32 hpd_bxt[HPD_NUM_PINS] = {
[HPD_PORT_C] = BXT_DE_PORT_HP_DDIC
};
static const u32 hpd_gen11[HPD_NUM_PINS] = {
[HPD_PORT_C] = GEN11_TC1_HOTPLUG | GEN11_TBT1_HOTPLUG,
[HPD_PORT_D] = GEN11_TC2_HOTPLUG | GEN11_TBT2_HOTPLUG,
[HPD_PORT_E] = GEN11_TC3_HOTPLUG | GEN11_TBT3_HOTPLUG,
[HPD_PORT_F] = GEN11_TC4_HOTPLUG | GEN11_TBT4_HOTPLUG
};
/* IIR can theoretically queue up two events. Be paranoid. */
#define GEN8_IRQ_RESET_NDX(type, which) do { \
I915_WRITE(GEN8_##type##_IMR(which), 0xffffffff); \
@ -1549,6 +1556,22 @@ static void gen8_gt_irq_handler(struct drm_i915_private *i915,
}
}
static bool gen11_port_hotplug_long_detect(enum port port, u32 val)
{
switch (port) {
case PORT_C:
return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC1);
case PORT_D:
return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC2);
case PORT_E:
return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC3);
case PORT_F:
return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC4);
default:
return false;
}
}
static bool bxt_port_hotplug_long_detect(enum port port, u32 val)
{
switch (port) {
@ -1893,9 +1916,17 @@ static void i9xx_pipestat_irq_ack(struct drm_i915_private *dev_priv,
/*
* Clear the PIPE*STAT regs before the IIR
*
* Toggle the enable bits to make sure we get an
* edge in the ISR pipe event bit if we don't clear
* all the enabled status bits. Otherwise the edge
* triggered IIR on i965/g4x wouldn't notice that
* an interrupt is still pending.
*/
if (pipe_stats[pipe])
I915_WRITE(reg, enable_mask | pipe_stats[pipe]);
if (pipe_stats[pipe]) {
I915_WRITE(reg, pipe_stats[pipe]);
I915_WRITE(reg, enable_mask);
}
}
spin_unlock(&dev_priv->irq_lock);
}
@ -2590,6 +2621,40 @@ static void bxt_hpd_irq_handler(struct drm_i915_private *dev_priv,
intel_hpd_irq_handler(dev_priv, pin_mask, long_mask);
}
static void gen11_hpd_irq_handler(struct drm_i915_private *dev_priv, u32 iir)
{
u32 pin_mask = 0, long_mask = 0;
u32 trigger_tc = iir & GEN11_DE_TC_HOTPLUG_MASK;
u32 trigger_tbt = iir & GEN11_DE_TBT_HOTPLUG_MASK;
if (trigger_tc) {
u32 dig_hotplug_reg;
dig_hotplug_reg = I915_READ(GEN11_TC_HOTPLUG_CTL);
I915_WRITE(GEN11_TC_HOTPLUG_CTL, dig_hotplug_reg);
intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask, trigger_tc,
dig_hotplug_reg, hpd_gen11,
gen11_port_hotplug_long_detect);
}
if (trigger_tbt) {
u32 dig_hotplug_reg;
dig_hotplug_reg = I915_READ(GEN11_TBT_HOTPLUG_CTL);
I915_WRITE(GEN11_TBT_HOTPLUG_CTL, dig_hotplug_reg);
intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask, trigger_tbt,
dig_hotplug_reg, hpd_gen11,
gen11_port_hotplug_long_detect);
}
if (pin_mask)
intel_hpd_irq_handler(dev_priv, pin_mask, long_mask);
else
DRM_ERROR("Unexpected DE HPD interrupt 0x%08x\n", iir);
}
static irqreturn_t
gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
{
@ -2625,6 +2690,17 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
DRM_ERROR("The master control interrupt lied (DE MISC)!\n");
}
if (INTEL_GEN(dev_priv) >= 11 && (master_ctl & GEN11_DE_HPD_IRQ)) {
iir = I915_READ(GEN11_DE_HPD_IIR);
if (iir) {
I915_WRITE(GEN11_DE_HPD_IIR, iir);
ret = IRQ_HANDLED;
gen11_hpd_irq_handler(dev_priv, iir);
} else {
DRM_ERROR("The master control interrupt lied, (DE HPD)!\n");
}
}
if (master_ctl & GEN8_DE_PORT_IRQ) {
iir = I915_READ(GEN8_DE_PORT_IIR);
if (iir) {
@ -2640,6 +2716,9 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
GEN9_AUX_CHANNEL_C |
GEN9_AUX_CHANNEL_D;
if (INTEL_GEN(dev_priv) >= 11)
tmp_mask |= ICL_AUX_CHANNEL_E;
if (IS_CNL_WITH_PORT_F(dev_priv) ||
INTEL_GEN(dev_priv) >= 11)
tmp_mask |= CNL_AUX_CHANNEL_F;
@ -2943,11 +3022,44 @@ gen11_gt_irq_handler(struct drm_i915_private * const i915,
spin_unlock(&i915->irq_lock);
}
static void
gen11_gu_misc_irq_ack(struct drm_i915_private *dev_priv, const u32 master_ctl,
u32 *iir)
{
void __iomem * const regs = dev_priv->regs;
if (!(master_ctl & GEN11_GU_MISC_IRQ))
return;
*iir = raw_reg_read(regs, GEN11_GU_MISC_IIR);
if (likely(*iir))
raw_reg_write(regs, GEN11_GU_MISC_IIR, *iir);
}
static void
gen11_gu_misc_irq_handler(struct drm_i915_private *dev_priv,
const u32 master_ctl, const u32 iir)
{
if (!(master_ctl & GEN11_GU_MISC_IRQ))
return;
if (unlikely(!iir)) {
DRM_ERROR("GU_MISC iir blank!\n");
return;
}
if (iir & GEN11_GU_MISC_GSE)
intel_opregion_asle_intr(dev_priv);
else
DRM_ERROR("Unexpected GU_MISC interrupt 0x%x\n", iir);
}
static irqreturn_t gen11_irq_handler(int irq, void *arg)
{
struct drm_i915_private * const i915 = to_i915(arg);
void __iomem * const regs = i915->regs;
u32 master_ctl;
u32 gu_misc_iir;
if (!intel_irqs_enabled(i915))
return IRQ_NONE;
@ -2976,9 +3088,13 @@ static irqreturn_t gen11_irq_handler(int irq, void *arg)
enable_rpm_wakeref_asserts(i915);
}
gen11_gu_misc_irq_ack(i915, master_ctl, &gu_misc_iir);
/* Acknowledge and enable interrupts. */
raw_reg_write(regs, GEN11_GFX_MSTR_IRQ, GEN11_MASTER_IRQ | master_ctl);
gen11_gu_misc_irq_handler(i915, master_ctl, gu_misc_iir);
return IRQ_HANDLED;
}
@ -3465,6 +3581,8 @@ static void gen11_irq_reset(struct drm_device *dev)
GEN3_IRQ_RESET(GEN8_DE_PORT_);
GEN3_IRQ_RESET(GEN8_DE_MISC_);
GEN3_IRQ_RESET(GEN11_DE_HPD_);
GEN3_IRQ_RESET(GEN11_GU_MISC_);
GEN3_IRQ_RESET(GEN8_PCU_);
}
@ -3582,6 +3700,41 @@ static void ibx_hpd_irq_setup(struct drm_i915_private *dev_priv)
ibx_hpd_detection_setup(dev_priv);
}
static void gen11_hpd_detection_setup(struct drm_i915_private *dev_priv)
{
u32 hotplug;
hotplug = I915_READ(GEN11_TC_HOTPLUG_CTL);
hotplug |= GEN11_HOTPLUG_CTL_ENABLE(PORT_TC1) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC2) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC3) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC4);
I915_WRITE(GEN11_TC_HOTPLUG_CTL, hotplug);
hotplug = I915_READ(GEN11_TBT_HOTPLUG_CTL);
hotplug |= GEN11_HOTPLUG_CTL_ENABLE(PORT_TC1) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC2) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC3) |
GEN11_HOTPLUG_CTL_ENABLE(PORT_TC4);
I915_WRITE(GEN11_TBT_HOTPLUG_CTL, hotplug);
}
static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv)
{
u32 hotplug_irqs, enabled_irqs;
u32 val;
enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_gen11);
hotplug_irqs = GEN11_DE_TC_HOTPLUG_MASK | GEN11_DE_TBT_HOTPLUG_MASK;
val = I915_READ(GEN11_DE_HPD_IMR);
val &= ~hotplug_irqs;
I915_WRITE(GEN11_DE_HPD_IMR, val);
POSTING_READ(GEN11_DE_HPD_IMR);
gen11_hpd_detection_setup(dev_priv);
}
static void spt_hpd_detection_setup(struct drm_i915_private *dev_priv)
{
u32 val, hotplug;
@ -3908,9 +4061,12 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
uint32_t de_pipe_enables;
u32 de_port_masked = GEN8_AUX_CHANNEL_A;
u32 de_port_enables;
u32 de_misc_masked = GEN8_DE_MISC_GSE | GEN8_DE_EDP_PSR;
u32 de_misc_masked = GEN8_DE_EDP_PSR;
enum pipe pipe;
if (INTEL_GEN(dev_priv) <= 10)
de_misc_masked |= GEN8_DE_MISC_GSE;
if (INTEL_GEN(dev_priv) >= 9) {
de_pipe_masked |= GEN9_DE_PIPE_IRQ_FAULT_ERRORS;
de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C |
@ -3921,6 +4077,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS;
}
if (INTEL_GEN(dev_priv) >= 11)
de_port_masked |= ICL_AUX_CHANNEL_E;
if (IS_CNL_WITH_PORT_F(dev_priv) || INTEL_GEN(dev_priv) >= 11)
de_port_masked |= CNL_AUX_CHANNEL_F;
@ -3949,10 +4108,18 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
GEN3_IRQ_INIT(GEN8_DE_PORT_, ~de_port_masked, de_port_enables);
GEN3_IRQ_INIT(GEN8_DE_MISC_, ~de_misc_masked, de_misc_masked);
if (IS_GEN9_LP(dev_priv))
if (INTEL_GEN(dev_priv) >= 11) {
u32 de_hpd_masked = 0;
u32 de_hpd_enables = GEN11_DE_TC_HOTPLUG_MASK |
GEN11_DE_TBT_HOTPLUG_MASK;
GEN3_IRQ_INIT(GEN11_DE_HPD_, ~de_hpd_masked, de_hpd_enables);
gen11_hpd_detection_setup(dev_priv);
} else if (IS_GEN9_LP(dev_priv)) {
bxt_hpd_detection_setup(dev_priv);
else if (IS_BROADWELL(dev_priv))
} else if (IS_BROADWELL(dev_priv)) {
ilk_hpd_detection_setup(dev_priv);
}
}
static int gen8_irq_postinstall(struct drm_device *dev)
@ -4004,10 +4171,13 @@ static void gen11_gt_irq_postinstall(struct drm_i915_private *dev_priv)
static int gen11_irq_postinstall(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 gu_misc_masked = GEN11_GU_MISC_GSE;
gen11_gt_irq_postinstall(dev_priv);
gen8_de_irq_postinstall(dev_priv);
GEN3_IRQ_INIT(GEN11_GU_MISC_, ~gu_misc_masked, gu_misc_masked);
I915_WRITE(GEN11_DISPLAY_INT_CTL, GEN11_DISPLAY_IRQ_ENABLE);
I915_WRITE(GEN11_GFX_MSTR_IRQ, GEN11_MASTER_IRQ);
@ -4471,7 +4641,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
dev->driver->irq_uninstall = gen11_irq_reset;
dev->driver->enable_vblank = gen8_enable_vblank;
dev->driver->disable_vblank = gen8_disable_vblank;
dev_priv->display.hpd_irq_setup = spt_hpd_irq_setup;
dev_priv->display.hpd_irq_setup = gen11_hpd_irq_setup;
} else if (INTEL_GEN(dev_priv) >= 8) {
dev->driver->irq_handler = gen8_irq_handler;
dev->driver->irq_preinstall = gen8_irq_reset;

View file

@ -657,12 +657,15 @@ static const struct pci_device_id pciidlist[] = {
INTEL_KBL_GT2_IDS(&intel_kabylake_gt2_info),
INTEL_KBL_GT3_IDS(&intel_kabylake_gt3_info),
INTEL_KBL_GT4_IDS(&intel_kabylake_gt3_info),
INTEL_AML_GT2_IDS(&intel_kabylake_gt2_info),
INTEL_CFL_S_GT1_IDS(&intel_coffeelake_gt1_info),
INTEL_CFL_S_GT2_IDS(&intel_coffeelake_gt2_info),
INTEL_CFL_H_GT2_IDS(&intel_coffeelake_gt2_info),
INTEL_CFL_U_GT1_IDS(&intel_coffeelake_gt1_info),
INTEL_CFL_U_GT2_IDS(&intel_coffeelake_gt2_info),
INTEL_CFL_U_GT3_IDS(&intel_coffeelake_gt3_info),
INTEL_WHL_U_GT1_IDS(&intel_coffeelake_gt1_info),
INTEL_WHL_U_GT2_IDS(&intel_coffeelake_gt2_info),
INTEL_WHL_U_GT3_IDS(&intel_coffeelake_gt3_info),
INTEL_CNL_IDS(&intel_cannonlake_info),
INTEL_ICL_11_IDS(&intel_icelake_11_info),
{0, 0, 0}

View file

@ -315,7 +315,7 @@ static u32 i915_oa_max_sample_rate = 100000;
* code assumes all reports have a power-of-two size and ~(size - 1) can
* be used as a mask to align the OA tail pointer.
*/
static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
static const struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
[I915_OA_FORMAT_A13] = { 0, 64 },
[I915_OA_FORMAT_A29] = { 1, 128 },
[I915_OA_FORMAT_A13_B8_C8] = { 2, 128 },
@ -326,7 +326,7 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
[I915_OA_FORMAT_C4_B8] = { 7, 64 },
};
static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
static const struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
[I915_OA_FORMAT_A12] = { 0, 64 },
[I915_OA_FORMAT_A12_B8_C8] = { 2, 128 },
[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
@ -1279,23 +1279,23 @@ static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
i915->perf.oa.specific_ctx_id_mask =
(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
} else {
i915->perf.oa.specific_ctx_id = stream->ctx->hw_id;
i915->perf.oa.specific_ctx_id_mask =
(1U << GEN8_CTX_ID_WIDTH) - 1;
i915->perf.oa.specific_ctx_id =
upper_32_bits(ce->lrc_desc);
i915->perf.oa.specific_ctx_id &=
i915->perf.oa.specific_ctx_id_mask;
}
break;
case 11: {
struct intel_engine_cs *engine = i915->engine[RCS];
i915->perf.oa.specific_ctx_id =
stream->ctx->hw_id << (GEN11_SW_CTX_ID_SHIFT - 32) |
engine->instance << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
engine->class << (GEN11_ENGINE_INSTANCE_SHIFT - 32);
i915->perf.oa.specific_ctx_id_mask =
((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32) |
((1U << GEN11_ENGINE_INSTANCE_WIDTH) - 1) << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
((1 << GEN11_ENGINE_CLASS_WIDTH) - 1) << (GEN11_ENGINE_CLASS_SHIFT - 32);
i915->perf.oa.specific_ctx_id = upper_32_bits(ce->lrc_desc);
i915->perf.oa.specific_ctx_id &=
i915->perf.oa.specific_ctx_id_mask;
break;
}

View file

@ -94,7 +94,10 @@ struct vgt_if {
u32 rsv5[4];
u32 g2v_notify;
u32 rsv6[7];
u32 rsv6[5];
u32 cursor_x_hot;
u32 cursor_y_hot;
struct {
u32 lo;

File diff suppressed because it is too large.

View file

@ -817,6 +817,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
/* Keep a second pin for the dual retirement along engine and ring */
__intel_context_pin(ce);
rq->infix = rq->ring->emit; /* end of header; start of user payload */
/* Check that we didn't interrupt ourselves with a new request */
GEM_BUG_ON(rq->timeline->seqno != rq->fence.seqno);
return rq;
@ -1016,14 +1018,13 @@ i915_request_await_object(struct i915_request *to,
* request is not being tracked for completion but the work itself is
* going to happen on the hardware. This would be a Bad Thing(tm).
*/
void __i915_request_add(struct i915_request *request, bool flush_caches)
void i915_request_add(struct i915_request *request)
{
struct intel_engine_cs *engine = request->engine;
struct i915_timeline *timeline = request->timeline;
struct intel_ring *ring = request->ring;
struct i915_request *prev;
u32 *cs;
int err;
GEM_TRACE("%s fence %llx:%d\n",
engine->name, request->fence.context, request->fence.seqno);
@ -1044,20 +1045,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
* know that it is time to use that space up.
*/
request->reserved_space = 0;
/*
* Emit any outstanding flushes - execbuf can fail to emit the flush
* after having emitted the batchbuffer command. Hence we need to fix
* things up similar to emitting the lazy request. The difference here
* is that the flush _must_ happen before the next request, no matter
* what.
*/
if (flush_caches) {
err = engine->emit_flush(request, EMIT_FLUSH);
/* Not allowed to fail! */
WARN(err, "engine->emit_flush() failed: %d!\n", err);
}
engine->emit_flush(request, EMIT_FLUSH);
/*
* Record the position of the start of the breadcrumb so that
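
With the flush made unconditional above, the flush_caches parameter disappears and __i915_request_add() becomes plain i915_request_add(); every caller touched in this diff (execbuffer, context switching, the reset paths) is converted to the single form. A usage sketch after this change:

        rq = i915_request_alloc(engine, ctx);
        if (IS_ERR(rq))
                return PTR_ERR(rq);
        /* ... emit commands into the request ... */
        i915_request_add(rq); /* always emits EMIT_FLUSH before closing the request */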

View file

@ -134,6 +134,9 @@ struct i915_request {
/** Position in the ring of the start of the request */
u32 head;
/** Position in the ring of the start of the user packets */
u32 infix;
/**
* Position in the ring of the start of the postfix.
* This is required to calculate the maximum available ring space
@ -250,9 +253,7 @@ int i915_request_await_object(struct i915_request *to,
int i915_request_await_dma_fence(struct i915_request *rq,
struct dma_fence *fence);
void __i915_request_add(struct i915_request *rq, bool flush_caches);
#define i915_request_add(rq) \
__i915_request_add(rq, false)
void i915_request_add(struct i915_request *rq);
void __i915_request_submit(struct i915_request *request);
void i915_request_submit(struct i915_request *request);

View file

@ -973,39 +973,6 @@ DEFINE_EVENT(i915_context, i915_context_free,
TP_ARGS(ctx)
);
/**
* DOC: switch_mm tracepoint
*
* This tracepoint allows tracking of the mm switch, which is an important point
* in the lifetime of the vm in the legacy submission path. This tracepoint is
* called only if full ppgtt is enabled.
*/
TRACE_EVENT(switch_mm,
TP_PROTO(struct intel_engine_cs *engine, struct i915_gem_context *to),
TP_ARGS(engine, to),
TP_STRUCT__entry(
__field(u16, class)
__field(u16, instance)
__field(struct i915_gem_context *, to)
__field(struct i915_address_space *, vm)
__field(u32, dev)
),
TP_fast_assign(
__entry->class = engine->uabi_class;
__entry->instance = engine->instance;
__entry->to = to;
__entry->vm = to->ppgtt ? &to->ppgtt->vm : NULL;
__entry->dev = engine->i915->drm.primary->index;
),
TP_printk("dev=%u, engine=%u:%u, ctx=%p, ctx_vm=%p",
__entry->dev, __entry->class, __entry->instance, __entry->to,
__entry->vm)
);
#endif /* _I915_TRACE_H_ */
/* This part must be outside protection */

View file

@ -95,6 +95,7 @@ vma_create(struct drm_i915_gem_object *obj,
init_request_active(&vma->last_read[i], i915_vma_retire);
init_request_active(&vma->last_fence, NULL);
vma->vm = vm;
vma->ops = &vm->vma_ops;
vma->obj = obj;
vma->resv = obj->resv;
vma->size = obj->base.size;
@ -280,7 +281,7 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
GEM_BUG_ON(!vma->pages);
trace_i915_vma_bind(vma, bind_flags);
ret = vma->vm->bind_vma(vma, cache_level, bind_flags);
ret = vma->ops->bind_vma(vma, cache_level, bind_flags);
if (ret)
return ret;
@ -345,7 +346,7 @@ void i915_vma_flush_writes(struct i915_vma *vma)
void i915_vma_unpin_iomap(struct i915_vma *vma)
{
lockdep_assert_held(&vma->obj->base.dev->struct_mutex);
lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
GEM_BUG_ON(vma->iomap == NULL);
@ -365,6 +366,7 @@ void i915_vma_unpin_and_release(struct i915_vma **p_vma)
return;
obj = vma->obj;
GEM_BUG_ON(!obj);
i915_vma_unpin(vma);
i915_vma_close(vma);
@ -489,7 +491,7 @@ static int
i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
{
struct drm_i915_private *dev_priv = vma->vm->i915;
struct drm_i915_gem_object *obj = vma->obj;
unsigned int cache_level;
u64 start, end;
int ret;
@ -524,20 +526,25 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
* attempt to find space.
*/
if (size > end) {
DRM_DEBUG("Attempting to bind an object larger than the aperture: request=%llu [object=%zd] > %s aperture=%llu\n",
size, obj->base.size,
flags & PIN_MAPPABLE ? "mappable" : "total",
DRM_DEBUG("Attempting to bind an object larger than the aperture: request=%llu > %s aperture=%llu\n",
size, flags & PIN_MAPPABLE ? "mappable" : "total",
end);
return -ENOSPC;
}
ret = i915_gem_object_pin_pages(obj);
if (ret)
return ret;
if (vma->obj) {
ret = i915_gem_object_pin_pages(vma->obj);
if (ret)
return ret;
cache_level = vma->obj->cache_level;
} else {
cache_level = 0;
}
GEM_BUG_ON(vma->pages);
ret = vma->vm->set_pages(vma);
ret = vma->ops->set_pages(vma);
if (ret)
goto err_unpin;
@ -550,7 +557,7 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
}
ret = i915_gem_gtt_reserve(vma->vm, &vma->node,
size, offset, obj->cache_level,
size, offset, cache_level,
flags);
if (ret)
goto err_clear;
@ -589,7 +596,7 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
}
ret = i915_gem_gtt_insert(vma->vm, &vma->node,
size, alignment, obj->cache_level,
size, alignment, cache_level,
start, end, flags);
if (ret)
goto err_clear;
@ -598,23 +605,28 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
GEM_BUG_ON(vma->node.start + vma->node.size > end);
}
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, obj->cache_level));
GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, cache_level));
list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
spin_lock(&dev_priv->mm.obj_lock);
list_move_tail(&obj->mm.link, &dev_priv->mm.bound_list);
obj->bind_count++;
spin_unlock(&dev_priv->mm.obj_lock);
if (vma->obj) {
struct drm_i915_gem_object *obj = vma->obj;
assert_bind_count(obj);
spin_lock(&dev_priv->mm.obj_lock);
list_move_tail(&obj->mm.link, &dev_priv->mm.bound_list);
obj->bind_count++;
spin_unlock(&dev_priv->mm.obj_lock);
assert_bind_count(obj);
}
return 0;
err_clear:
vma->vm->clear_pages(vma);
vma->ops->clear_pages(vma);
err_unpin:
i915_gem_object_unpin_pages(obj);
if (vma->obj)
i915_gem_object_unpin_pages(vma->obj);
return ret;
}
@ -622,30 +634,35 @@ static void
i915_vma_remove(struct i915_vma *vma)
{
struct drm_i915_private *i915 = vma->vm->i915;
struct drm_i915_gem_object *obj = vma->obj;
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
GEM_BUG_ON(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND));
vma->vm->clear_pages(vma);
vma->ops->clear_pages(vma);
drm_mm_remove_node(&vma->node);
list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
/* Since the unbound list is global, only move to that list if
/*
* Since the unbound list is global, only move to that list if
* no more VMAs exist.
*/
spin_lock(&i915->mm.obj_lock);
if (--obj->bind_count == 0)
list_move_tail(&obj->mm.link, &i915->mm.unbound_list);
spin_unlock(&i915->mm.obj_lock);
if (vma->obj) {
struct drm_i915_gem_object *obj = vma->obj;
/* And finally now the object is completely decoupled from this vma,
* we can drop its hold on the backing storage and allow it to be
* reaped by the shrinker.
*/
i915_gem_object_unpin_pages(obj);
assert_bind_count(obj);
spin_lock(&i915->mm.obj_lock);
if (--obj->bind_count == 0)
list_move_tail(&obj->mm.link, &i915->mm.unbound_list);
spin_unlock(&i915->mm.obj_lock);
/*
* And finally now the object is completely decoupled from this
* vma, we can drop its hold on the backing storage and allow
* it to be reaped by the shrinker.
*/
i915_gem_object_unpin_pages(obj);
assert_bind_count(obj);
}
}
int __i915_vma_do_pin(struct i915_vma *vma,
@ -670,7 +687,7 @@ int __i915_vma_do_pin(struct i915_vma *vma,
}
GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
ret = i915_vma_bind(vma, vma->obj->cache_level, flags);
ret = i915_vma_bind(vma, vma->obj ? vma->obj->cache_level : 0, flags);
if (ret)
goto err_remove;
@ -727,6 +744,7 @@ void i915_vma_reopen(struct i915_vma *vma)
static void __i915_vma_destroy(struct i915_vma *vma)
{
struct drm_i915_private *i915 = vma->vm->i915;
int i;
GEM_BUG_ON(vma->node.allocated);
@ -738,12 +756,13 @@ static void __i915_vma_destroy(struct i915_vma *vma)
list_del(&vma->obj_link);
list_del(&vma->vm_link);
rb_erase(&vma->obj_node, &vma->obj->vma_tree);
if (vma->obj)
rb_erase(&vma->obj_node, &vma->obj->vma_tree);
if (!i915_vma_is_ggtt(vma))
i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
kmem_cache_free(i915->vmas, vma);
}
void i915_vma_destroy(struct i915_vma *vma)
@ -809,13 +828,13 @@ void i915_vma_revoke_mmap(struct i915_vma *vma)
int i915_vma_unbind(struct i915_vma *vma)
{
struct drm_i915_gem_object *obj = vma->obj;
unsigned long active;
int ret;
lockdep_assert_held(&obj->base.dev->struct_mutex);
lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
/* First wait upon any activity as retiring the request may
/*
* First wait upon any activity as retiring the request may
* have side-effects such as unpinning or even unbinding this vma.
*/
might_sleep();
@ -823,7 +842,8 @@ int i915_vma_unbind(struct i915_vma *vma)
if (active) {
int idx;
/* When a closed VMA is retired, it is unbound - eek.
/*
* When a closed VMA is retired, it is unbound - eek.
* In order to prevent it from being recursively closed,
* take a pin on the vma so that the second unbind is
* aborted.
@ -861,9 +881,6 @@ int i915_vma_unbind(struct i915_vma *vma)
if (!drm_mm_node_allocated(&vma->node))
return 0;
GEM_BUG_ON(obj->bind_count == 0);
GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
if (i915_vma_is_map_and_fenceable(vma)) {
/*
* Check that we have flushed all writes through the GGTT
@ -890,7 +907,7 @@ int i915_vma_unbind(struct i915_vma *vma)
if (likely(!vma->vm->closed)) {
trace_i915_vma_unbind(vma);
vma->vm->unbind_vma(vma);
vma->ops->unbind_vma(vma);
}
vma->flags &= ~(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND);


@ -49,10 +49,12 @@ struct i915_vma {
struct drm_mm_node node;
struct drm_i915_gem_object *obj;
struct i915_address_space *vm;
const struct i915_vma_ops *ops;
struct drm_i915_fence_reg *fence;
struct reservation_object *resv; /** Alias of obj->resv */
struct sg_table *pages;
void __iomem *iomap;
void *private; /* owned by creator */
u64 size;
u64 display_alignment;
struct i915_page_sizes page_sizes;
@ -339,6 +341,12 @@ static inline void i915_vma_unpin(struct i915_vma *vma)
__i915_vma_unpin(vma);
}
static inline bool i915_vma_is_bound(const struct i915_vma *vma,
unsigned int where)
{
return vma->flags & where;
}
/**
* i915_vma_pin_iomap - calls ioremap_wc to map the GGTT VMA via the aperture
* @vma: VMA to iomap
@ -407,7 +415,7 @@ static inline void __i915_vma_unpin_fence(struct i915_vma *vma)
static inline void
i915_vma_unpin_fence(struct i915_vma *vma)
{
lockdep_assert_held(&vma->obj->base.dev->struct_mutex);
/* lockdep_assert_held(&vma->vm->i915->drm.struct_mutex); */
if (vma->fence)
__i915_vma_unpin_fence(vma);
}


@ -12,10 +12,6 @@
#define INTEL_DSM_REVISION_ID 1 /* For Calpella anyway... */
#define INTEL_DSM_FN_PLATFORM_MUX_INFO 1 /* No args */
static struct intel_dsm_priv {
acpi_handle dhandle;
} intel_dsm_priv;
static const guid_t intel_dsm_guid =
GUID_INIT(0x7ed873d3, 0xc2d0, 0x4e4f,
0xa8, 0x54, 0x0f, 0x13, 0x17, 0xb0, 0x1c, 0x2c);
@ -72,12 +68,12 @@ static char *intel_dsm_mux_type(u8 type)
}
}
static void intel_dsm_platform_mux_info(void)
static void intel_dsm_platform_mux_info(acpi_handle dhandle)
{
int i;
union acpi_object *pkg, *connector_count;
pkg = acpi_evaluate_dsm_typed(intel_dsm_priv.dhandle, &intel_dsm_guid,
pkg = acpi_evaluate_dsm_typed(dhandle, &intel_dsm_guid,
INTEL_DSM_REVISION_ID, INTEL_DSM_FN_PLATFORM_MUX_INFO,
NULL, ACPI_TYPE_PACKAGE);
if (!pkg) {
@ -107,41 +103,40 @@ static void intel_dsm_platform_mux_info(void)
ACPI_FREE(pkg);
}
static bool intel_dsm_pci_probe(struct pci_dev *pdev)
static acpi_handle intel_dsm_pci_probe(struct pci_dev *pdev)
{
acpi_handle dhandle;
dhandle = ACPI_HANDLE(&pdev->dev);
if (!dhandle)
return false;
return NULL;
if (!acpi_check_dsm(dhandle, &intel_dsm_guid, INTEL_DSM_REVISION_ID,
1 << INTEL_DSM_FN_PLATFORM_MUX_INFO)) {
DRM_DEBUG_KMS("no _DSM method for intel device\n");
return false;
return NULL;
}
intel_dsm_priv.dhandle = dhandle;
intel_dsm_platform_mux_info();
intel_dsm_platform_mux_info(dhandle);
return true;
return dhandle;
}
static bool intel_dsm_detect(void)
{
acpi_handle dhandle = NULL;
char acpi_method_name[255] = { 0 };
struct acpi_buffer buffer = {sizeof(acpi_method_name), acpi_method_name};
struct pci_dev *pdev = NULL;
bool has_dsm = false;
int vga_count = 0;
while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
vga_count++;
has_dsm |= intel_dsm_pci_probe(pdev);
dhandle = intel_dsm_pci_probe(pdev) ?: dhandle;
}
if (vga_count == 2 && has_dsm) {
acpi_get_name(intel_dsm_priv.dhandle, ACPI_FULL_PATHNAME, &buffer);
if (vga_count == 2 && dhandle) {
acpi_get_name(dhandle, ACPI_FULL_PATHNAME, &buffer);
DRM_DEBUG_DRIVER("vga_switcheroo: detected DSM switching method %s handle\n",
acpi_method_name);
return true;


@ -59,7 +59,8 @@ int intel_digital_connector_atomic_get_property(struct drm_connector *connector,
else if (property == dev_priv->broadcast_rgb_property)
*val = intel_conn_state->broadcast_rgb;
else {
DRM_DEBUG_ATOMIC("Unknown property %s\n", property->name);
DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n",
property->base.id, property->name);
return -EINVAL;
}
@ -95,7 +96,8 @@ int intel_digital_connector_atomic_set_property(struct drm_connector *connector,
return 0;
}
DRM_DEBUG_ATOMIC("Unknown property %s\n", property->name);
DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n",
property->base.id, property->name);
return -EINVAL;
}


@ -265,7 +265,8 @@ intel_plane_atomic_get_property(struct drm_plane *plane,
struct drm_property *property,
uint64_t *val)
{
DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
DRM_DEBUG_KMS("Unknown property [PROP:%d:%s]\n",
property->base.id, property->name);
return -EINVAL;
}
@ -287,6 +288,7 @@ intel_plane_atomic_set_property(struct drm_plane *plane,
struct drm_property *property,
uint64_t val)
{
DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
DRM_DEBUG_KMS("Unknown property [PROP:%d:%s]\n",
property->base.id, property->name);
return -EINVAL;
}


@ -59,6 +59,7 @@
*/
/* DP N/M table */
#define LC_810M 810000
#define LC_540M 540000
#define LC_270M 270000
#define LC_162M 162000
@ -99,6 +100,15 @@ static const struct dp_aud_n_m dp_aud_n_m[] = {
{ 128000, LC_540M, 4096, 33750 },
{ 176400, LC_540M, 3136, 18750 },
{ 192000, LC_540M, 2048, 11250 },
{ 32000, LC_810M, 1024, 50625 },
{ 44100, LC_810M, 784, 28125 },
{ 48000, LC_810M, 512, 16875 },
{ 64000, LC_810M, 2048, 50625 },
{ 88200, LC_810M, 1568, 28125 },
{ 96000, LC_810M, 1024, 16875 },
{ 128000, LC_810M, 4096, 50625 },
{ 176400, LC_810M, 3136, 28125 },
{ 192000, LC_810M, 2048, 16875 },
};
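The LC_810M rows added above follow the usual DisplayPort audio relation Maud/Naud = 512 * fs / f_LS_CLK, with the link symbol clock at 810 MHz for HBR3. A minimal standalone check of the 48 kHz row (values copied from the table; illustration only, not part of the patch):

#include <stdio.h>

int main(void)
{
	/* 48 kHz audio over an HBR3 link: LS clock = 810 MHz. */
	const unsigned long long fs = 48000;
	const unsigned long long ls_clk = 810000000;
	const unsigned long long m = 512, n = 16875;	/* { 48000, LC_810M, 512, 16875 } */

	/* Cross-multiplied to stay in integer arithmetic: m/n == 512*fs/ls_clk. */
	printf("%s\n", m * ls_clk == 512 * fs * n ? "consistent" : "mismatch");
	return 0;
}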
static const struct dp_aud_n_m *
@ -198,13 +208,13 @@ static int audio_config_hdmi_get_n(const struct intel_crtc_state *crtc_state,
}
static bool intel_eld_uptodate(struct drm_connector *connector,
i915_reg_t reg_eldv, uint32_t bits_eldv,
i915_reg_t reg_elda, uint32_t bits_elda,
i915_reg_t reg_eldv, u32 bits_eldv,
i915_reg_t reg_elda, u32 bits_elda,
i915_reg_t reg_edid)
{
struct drm_i915_private *dev_priv = to_i915(connector->dev);
uint8_t *eld = connector->eld;
uint32_t tmp;
const u8 *eld = connector->eld;
u32 tmp;
int i;
tmp = I915_READ(reg_eldv);
@ -218,7 +228,7 @@ static bool intel_eld_uptodate(struct drm_connector *connector,
I915_WRITE(reg_elda, tmp);
for (i = 0; i < drm_eld_size(eld) / 4; i++)
if (I915_READ(reg_edid) != *((uint32_t *)eld + i))
if (I915_READ(reg_edid) != *((const u32 *)eld + i))
return false;
return true;
@ -229,7 +239,7 @@ static void g4x_audio_codec_disable(struct intel_encoder *encoder,
const struct drm_connector_state *old_conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
uint32_t eldv, tmp;
u32 eldv, tmp;
DRM_DEBUG_KMS("Disable audio codec\n");
@ -251,12 +261,12 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_connector *connector = conn_state->connector;
uint8_t *eld = connector->eld;
uint32_t eldv;
uint32_t tmp;
const u8 *eld = connector->eld;
u32 eldv;
u32 tmp;
int len, i;
DRM_DEBUG_KMS("Enable audio codec, %u bytes ELD\n", eld[2]);
DRM_DEBUG_KMS("Enable audio codec, %u bytes ELD\n", drm_eld_size(eld));
tmp = I915_READ(G4X_AUD_VID_DID);
if (tmp == INTEL_AUDIO_DEVBLC || tmp == INTEL_AUDIO_DEVCL)
@ -278,7 +288,7 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
len = min(drm_eld_size(eld) / 4, len);
DRM_DEBUG_DRIVER("ELD size %d\n", len);
for (i = 0; i < len; i++)
I915_WRITE(G4X_HDMIW_HDMIEDID, *((uint32_t *)eld + i));
I915_WRITE(G4X_HDMIW_HDMIEDID, *((const u32 *)eld + i));
tmp = I915_READ(G4X_AUD_CNTL_ST);
tmp |= eldv;
@ -393,7 +403,7 @@ static void hsw_audio_codec_disable(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
enum pipe pipe = crtc->pipe;
uint32_t tmp;
u32 tmp;
DRM_DEBUG_KMS("Disable audio codec on pipe %c\n", pipe_name(pipe));
@ -426,8 +436,8 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_connector *connector = conn_state->connector;
enum pipe pipe = crtc->pipe;
const uint8_t *eld = connector->eld;
uint32_t tmp;
const u8 *eld = connector->eld;
u32 tmp;
int len, i;
DRM_DEBUG_KMS("Enable audio codec on pipe %c, %u bytes ELD\n",
@ -456,7 +466,7 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
/* Up to 84 bytes of hw ELD buffer */
len = min(drm_eld_size(eld), 84);
for (i = 0; i < len / 4; i++)
I915_WRITE(HSW_AUD_EDID_DATA(pipe), *((uint32_t *)eld + i));
I915_WRITE(HSW_AUD_EDID_DATA(pipe), *((const u32 *)eld + i));
/* ELD valid */
tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
@ -477,7 +487,7 @@ static void ilk_audio_codec_disable(struct intel_encoder *encoder,
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
enum pipe pipe = crtc->pipe;
enum port port = encoder->port;
uint32_t tmp, eldv;
u32 tmp, eldv;
i915_reg_t aud_config, aud_cntrl_st2;
DRM_DEBUG_KMS("Disable audio codec on port %c, pipe %c\n",
@ -524,8 +534,8 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
struct drm_connector *connector = conn_state->connector;
enum pipe pipe = crtc->pipe;
enum port port = encoder->port;
uint8_t *eld = connector->eld;
uint32_t tmp, eldv;
const u8 *eld = connector->eld;
u32 tmp, eldv;
int len, i;
i915_reg_t hdmiw_hdmiedid, aud_config, aud_cntl_st, aud_cntrl_st2;
@ -575,7 +585,7 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
/* Up to 84 bytes of hw ELD buffer */
len = min(drm_eld_size(eld), 84);
for (i = 0; i < len / 4; i++)
I915_WRITE(hdmiw_hdmiedid, *((uint32_t *)eld + i));
I915_WRITE(hdmiw_hdmiedid, *((const u32 *)eld + i));
/* ELD valid */
tmp = I915_READ(aud_cntrl_st2);


@ -652,7 +652,7 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
}
if (bdb->version >= 173) {
uint8_t vswing;
u8 vswing;
/* Don't read from VBT if module parameter has valid value*/
if (i915_modparams.edp_vswing) {
@ -710,7 +710,9 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
* New psr options 0=500us, 1=100us, 2=2500us, 3=0us
* Old decimal value is wake up time in multiples of 100 us.
*/
if (bdb->version >= 209 && IS_GEN9_BC(dev_priv)) {
if (bdb->version >= 205 &&
(IS_GEN9_BC(dev_priv) || IS_GEMINILAKE(dev_priv) ||
INTEL_GEN(dev_priv) >= 10)) {
switch (psr_table->tp1_wakeup_time) {
case 0:
dev_priv->vbt.psr.tp1_wakeup_time_us = 500;
@ -738,7 +740,7 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 100;
break;
case 3:
dev_priv->vbt.psr.tp1_wakeup_time_us = 0;
dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 0;
break;
default:
DRM_DEBUG_KMS("VBT tp2_tp3 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
@ -964,7 +966,7 @@ static int goto_next_sequence_v3(const u8 *data, int index, int total)
* includes MIPI_SEQ_ELEM_END byte, excludes the final MIPI_SEQ_END
* byte.
*/
size_of_sequence = *((const uint32_t *)(data + index));
size_of_sequence = *((const u32 *)(data + index));
index += 4;
seq_end = index + size_of_sequence;
@ -1719,7 +1721,7 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
const struct bdb_header *bdb;
u8 __iomem *bios = NULL;
if (HAS_PCH_NOP(dev_priv)) {
if (INTEL_INFO(dev_priv)->num_pipes == 0) {
DRM_DEBUG_KMS("Skipping VBT init due to disabled display.\n");
return;
}


@ -991,6 +991,16 @@ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
u32 freq_select, cdclk_ctl;
int ret;
/*
* Based on WA#1183 CDCLK rates 308 and 617MHz CDCLK rates are
* unsupported on SKL. In theory this should never happen since only
* the eDP1.4 2.16 and 4.32Gbps rates require it, but eDP1.4 is not
* supported on SKL either, see the above WA. WARN whenever trying to
* use the corresponding VCO freq as that always leads to using the
* minimum 308MHz CDCLK.
*/
WARN_ON_ONCE(IS_SKYLAKE(dev_priv) && vco == 8640000);
mutex_lock(&dev_priv->pcu_lock);
ret = skl_pcode_request(dev_priv, SKL_PCODE_CDCLK_CONTROL,
SKL_CDCLK_PREPARE_FOR_CHANGE,
@ -1861,11 +1871,35 @@ static void icl_set_cdclk(struct drm_i915_private *dev_priv,
skl_cdclk_decimal(cdclk));
mutex_lock(&dev_priv->pcu_lock);
/* TODO: add proper DVFS support. */
sandybridge_pcode_write(dev_priv, SKL_PCODE_CDCLK_CONTROL, 2);
sandybridge_pcode_write(dev_priv, SKL_PCODE_CDCLK_CONTROL,
cdclk_state->voltage_level);
mutex_unlock(&dev_priv->pcu_lock);
intel_update_cdclk(dev_priv);
/*
* Can't read out the voltage level :(
* Let's just assume everything is as expected.
*/
dev_priv->cdclk.hw.voltage_level = cdclk_state->voltage_level;
}
static u8 icl_calc_voltage_level(int cdclk)
{
switch (cdclk) {
case 50000:
case 307200:
case 312000:
return 0;
case 556800:
case 552000:
return 1;
default:
MISSING_CASE(cdclk);
case 652800:
case 648000:
return 2;
}
}
static void icl_get_cdclk(struct drm_i915_private *dev_priv,
@ -1899,7 +1933,7 @@ static void icl_get_cdclk(struct drm_i915_private *dev_priv,
*/
cdclk_state->vco = 0;
cdclk_state->cdclk = cdclk_state->bypass;
return;
goto out;
}
cdclk_state->vco = (val & BXT_DE_PLL_RATIO_MASK) * cdclk_state->ref;
@ -1908,6 +1942,14 @@ static void icl_get_cdclk(struct drm_i915_private *dev_priv,
WARN_ON((val & BXT_CDCLK_CD2X_DIV_SEL_MASK) != 0);
cdclk_state->cdclk = cdclk_state->vco / 2;
out:
/*
* Can't read this out :( Let's assume it's
* at least what the CDCLK frequency requires.
*/
cdclk_state->voltage_level =
icl_calc_voltage_level(cdclk_state->cdclk);
}
/**
@ -1950,6 +1992,8 @@ void icl_init_cdclk(struct drm_i915_private *dev_priv)
sanitized_state.cdclk = icl_calc_cdclk(0, sanitized_state.ref);
sanitized_state.vco = icl_calc_cdclk_pll_vco(dev_priv,
sanitized_state.cdclk);
sanitized_state.voltage_level =
icl_calc_voltage_level(sanitized_state.cdclk);
icl_set_cdclk(dev_priv, &sanitized_state);
}
@ -1967,6 +2011,7 @@ void icl_uninit_cdclk(struct drm_i915_private *dev_priv)
cdclk_state.cdclk = cdclk_state.bypass;
cdclk_state.vco = 0;
cdclk_state.voltage_level = icl_calc_voltage_level(cdclk_state.cdclk);
icl_set_cdclk(dev_priv, &cdclk_state);
}
@ -2470,6 +2515,9 @@ static int icl_modeset_calc_cdclk(struct drm_atomic_state *state)
intel_state->cdclk.logical.vco = vco;
intel_state->cdclk.logical.cdclk = cdclk;
intel_state->cdclk.logical.voltage_level =
max(icl_calc_voltage_level(cdclk),
cnl_compute_min_voltage_level(intel_state));
if (!intel_state->active_crtcs) {
cdclk = icl_calc_cdclk(0, ref);
@ -2477,6 +2525,8 @@ static int icl_modeset_calc_cdclk(struct drm_atomic_state *state)
intel_state->cdclk.actual.vco = vco;
intel_state->cdclk.actual.cdclk = cdclk;
intel_state->cdclk.actual.voltage_level =
icl_calc_voltage_level(cdclk);
} else {
intel_state->cdclk.actual = intel_state->cdclk.logical;
}


@ -232,6 +232,8 @@ static void hsw_post_disable_crt(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
intel_ddi_disable_pipe_clock(old_crtc_state);
pch_post_disable_crt(encoder, old_crtc_state, old_conn_state);
lpt_disable_pch_transcoder(dev_priv);
@ -268,6 +270,8 @@ static void hsw_pre_enable_crt(struct intel_encoder *encoder,
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
dev_priv->display.fdi_link_train(crtc, crtc_state);
intel_ddi_enable_pipe_clock(crtc_state);
}
static void hsw_enable_crt(struct intel_encoder *encoder,
@ -304,6 +308,9 @@ intel_crt_mode_valid(struct drm_connector *connector,
int max_dotclk = dev_priv->max_dotclk_freq;
int max_clock;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (mode->clock < 25000)
return MODE_CLOCK_LOW;
@ -330,6 +337,10 @@ intel_crt_mode_valid(struct drm_connector *connector,
(ironlake_get_lanes_required(mode->clock, 270000, 24) > 2))
return MODE_CLOCK_HIGH;
/* HSW/BDW FDI limited to 4k */
if (mode->hdisplay > 4096)
return MODE_H_ILLEGAL;
return MODE_OK;
}
@ -337,6 +348,12 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
return true;
}
@ -344,6 +361,12 @@ static bool pch_crt_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
pipe_config->has_pch_encoder = true;
return true;
@ -354,6 +377,16 @@ static bool hsw_crt_compute_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
/* HSW/BDW FDI limited to 4k */
if (adjusted_mode->crtc_hdisplay > 4096 ||
adjusted_mode->crtc_hblank_start > 4096)
return false;
pipe_config->has_pch_encoder = true;
@ -493,7 +526,7 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
* to get a reliable result.
*/
if (IS_G4X(dev_priv) && !IS_GM45(dev_priv))
if (IS_G45(dev_priv))
tries = 2;
else
tries = 1;


@ -915,7 +915,14 @@ static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port por
level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
if (IS_CANNONLAKE(dev_priv)) {
if (IS_ICELAKE(dev_priv)) {
if (port == PORT_A || port == PORT_B)
icl_get_combo_buf_trans(dev_priv, port,
INTEL_OUTPUT_HDMI, &n_entries);
else
n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations);
default_entry = n_entries - 1;
} else if (IS_CANNONLAKE(dev_priv)) {
cnl_get_buf_trans_hdmi(dev_priv, &n_entries);
default_entry = n_entries - 1;
} else if (IS_GEN9_LP(dev_priv)) {
@ -1055,6 +1062,8 @@ static uint32_t hsw_pll_to_ddi_pll_sel(const struct intel_shared_dpll *pll)
static uint32_t icl_pll_to_ddi_pll_sel(struct intel_encoder *encoder,
const struct intel_shared_dpll *pll)
{
struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);
int clock = crtc->config->port_clock;
const enum intel_dpll_id id = pll->info->id;
switch (id) {
@ -1063,6 +1072,20 @@ static uint32_t icl_pll_to_ddi_pll_sel(struct intel_encoder *encoder,
case DPLL_ID_ICL_DPLL0:
case DPLL_ID_ICL_DPLL1:
return DDI_CLK_SEL_NONE;
case DPLL_ID_ICL_TBTPLL:
switch (clock) {
case 162000:
return DDI_CLK_SEL_TBT_162;
case 270000:
return DDI_CLK_SEL_TBT_270;
case 540000:
return DDI_CLK_SEL_TBT_540;
case 810000:
return DDI_CLK_SEL_TBT_810;
default:
MISSING_CASE(clock);
break;
}
case DPLL_ID_ICL_MGPLL1:
case DPLL_ID_ICL_MGPLL2:
case DPLL_ID_ICL_MGPLL3:
@ -2632,6 +2655,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
intel_dp_start_link_train(intel_dp);
if (port != PORT_A || INTEL_GEN(dev_priv) >= 9)
intel_dp_stop_link_train(intel_dp);
intel_ddi_enable_pipe_clock(crtc_state);
}
static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
@ -2662,6 +2687,8 @@ static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
if (IS_GEN9_BC(dev_priv))
skl_ddi_set_iboost(encoder, level, INTEL_OUTPUT_HDMI);
intel_ddi_enable_pipe_clock(crtc_state);
intel_dig_port->set_infoframes(&encoder->base,
crtc_state->has_infoframe,
crtc_state, conn_state);
@ -2731,6 +2758,8 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
bool is_mst = intel_crtc_has_type(old_crtc_state,
INTEL_OUTPUT_DP_MST);
intel_ddi_disable_pipe_clock(old_crtc_state);
/*
* Power down sink before disabling the port, otherwise we end
* up getting interrupts from the sink on detecting link loss.
@ -2756,11 +2785,13 @@ static void intel_ddi_post_disable_hdmi(struct intel_encoder *encoder,
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
struct intel_hdmi *intel_hdmi = &dig_port->hdmi;
intel_disable_ddi_buf(encoder);
dig_port->set_infoframes(&encoder->base, false,
old_crtc_state, old_conn_state);
intel_ddi_disable_pipe_clock(old_crtc_state);
intel_disable_ddi_buf(encoder);
intel_display_power_put(dev_priv, dig_port->ddi_io_power_domain);
intel_ddi_clk_disable(encoder);
@ -3047,6 +3078,8 @@ void intel_ddi_compute_min_voltage_level(struct drm_i915_private *dev_priv,
{
if (IS_CANNONLAKE(dev_priv) && crtc_state->port_clock > 594000)
crtc_state->min_voltage_level = 2;
else if (IS_ICELAKE(dev_priv) && crtc_state->port_clock > 594000)
crtc_state->min_voltage_level = 1;
}
void intel_ddi_get_config(struct intel_encoder *encoder,


@ -5646,9 +5646,6 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
intel_encoders_pre_enable(crtc, pipe_config, old_state);
if (!transcoder_is_dsi(cpu_transcoder))
intel_ddi_enable_pipe_clock(pipe_config);
if (intel_crtc_has_dp_encoder(intel_crtc->config))
intel_dp_set_m_n(intel_crtc, M1_N1);
@ -5813,7 +5810,7 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
struct drm_crtc *crtc = old_crtc_state->base.crtc;
struct drm_i915_private *dev_priv = to_i915(crtc->dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum transcoder cpu_transcoder = intel_crtc->config->cpu_transcoder;
enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder;
intel_encoders_disable(crtc, old_crtc_state, old_state);
@ -5824,8 +5821,8 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
if (!transcoder_is_dsi(cpu_transcoder))
intel_disable_pipe(old_crtc_state);
if (intel_crtc_has_type(intel_crtc->config, INTEL_OUTPUT_DP_MST))
intel_ddi_set_vc_payload_alloc(intel_crtc->config, false);
if (intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST))
intel_ddi_set_vc_payload_alloc(old_crtc_state, false);
if (!transcoder_is_dsi(cpu_transcoder))
intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
@ -5835,9 +5832,6 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
else
ironlake_pfit_disable(intel_crtc, false);
if (!transcoder_is_dsi(cpu_transcoder))
intel_ddi_disable_pipe_clock(intel_crtc->config);
intel_encoders_post_disable(crtc, old_crtc_state, old_state);
if (INTEL_GEN(dev_priv) >= 11)
@ -9214,6 +9208,44 @@ static void cannonlake_get_ddi_pll(struct drm_i915_private *dev_priv,
pipe_config->shared_dpll = intel_get_shared_dpll_by_id(dev_priv, id);
}
static void icelake_get_ddi_pll(struct drm_i915_private *dev_priv,
enum port port,
struct intel_crtc_state *pipe_config)
{
enum intel_dpll_id id;
u32 temp;
/* TODO: TBT pll not implemented. */
switch (port) {
case PORT_A:
case PORT_B:
temp = I915_READ(DPCLKA_CFGCR0_ICL) &
DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(port);
id = temp >> DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(port);
if (WARN_ON(id != DPLL_ID_ICL_DPLL0 && id != DPLL_ID_ICL_DPLL1))
return;
break;
case PORT_C:
id = DPLL_ID_ICL_MGPLL1;
break;
case PORT_D:
id = DPLL_ID_ICL_MGPLL2;
break;
case PORT_E:
id = DPLL_ID_ICL_MGPLL3;
break;
case PORT_F:
id = DPLL_ID_ICL_MGPLL4;
break;
default:
MISSING_CASE(port);
return;
}
pipe_config->shared_dpll = intel_get_shared_dpll_by_id(dev_priv, id);
}
static void bxt_get_ddi_pll(struct drm_i915_private *dev_priv,
enum port port,
struct intel_crtc_state *pipe_config)
@ -9401,7 +9433,9 @@ static void haswell_get_ddi_port_state(struct intel_crtc *crtc,
port = (tmp & TRANS_DDI_PORT_MASK) >> TRANS_DDI_PORT_SHIFT;
if (IS_CANNONLAKE(dev_priv))
if (IS_ICELAKE(dev_priv))
icelake_get_ddi_pll(dev_priv, port, pipe_config);
else if (IS_CANNONLAKE(dev_priv))
cannonlake_get_ddi_pll(dev_priv, port, pipe_config);
else if (IS_GEN9_BC(dev_priv))
skylake_get_ddi_pll(dev_priv, port, pipe_config);
@ -14054,7 +14088,14 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
if (intel_crt_present(dev_priv))
intel_crt_init(dev_priv);
if (IS_GEN9_LP(dev_priv)) {
if (IS_ICELAKE(dev_priv)) {
intel_ddi_init(dev_priv, PORT_A);
intel_ddi_init(dev_priv, PORT_B);
intel_ddi_init(dev_priv, PORT_C);
intel_ddi_init(dev_priv, PORT_D);
intel_ddi_init(dev_priv, PORT_E);
intel_ddi_init(dev_priv, PORT_F);
} else if (IS_GEN9_LP(dev_priv)) {
/*
* FIXME: Broxton doesn't support port detection via the
* DDI_BUF_CTL_A or SFUSE_STRAP registers, find another way to
@ -14572,12 +14613,26 @@ static enum drm_mode_status
intel_mode_valid(struct drm_device *dev,
const struct drm_display_mode *mode)
{
struct drm_i915_private *dev_priv = to_i915(dev);
int hdisplay_max, htotal_max;
int vdisplay_max, vtotal_max;
/*
* Can't reject DBLSCAN here because Xorg ddxen can add piles
* of DBLSCAN modes to the output's mode list when they detect
* the scaling mode property on the connector. And they don't
* ask the kernel to validate those modes in any way until
* modeset time at which point the client gets a protocol error.
* So in order to not upset those clients we silently ignore the
* DBLSCAN flag on such connectors. For other connectors we will
* reject modes with the DBLSCAN flag in encoder->compute_config().
* And we always reject DBLSCAN modes in connector->mode_valid()
* as we never want such modes on the connector's mode list.
*/
if (mode->vscan > 1)
return MODE_NO_VSCAN;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (mode->flags & DRM_MODE_FLAG_HSKEW)
return MODE_H_ILLEGAL;
@ -14591,6 +14646,36 @@ intel_mode_valid(struct drm_device *dev,
DRM_MODE_FLAG_CLKDIV2))
return MODE_BAD;
if (INTEL_GEN(dev_priv) >= 9 ||
IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) {
hdisplay_max = 8192; /* FDI max 4096 handled elsewhere */
vdisplay_max = 4096;
htotal_max = 8192;
vtotal_max = 8192;
} else if (INTEL_GEN(dev_priv) >= 3) {
hdisplay_max = 4096;
vdisplay_max = 4096;
htotal_max = 8192;
vtotal_max = 8192;
} else {
hdisplay_max = 2048;
vdisplay_max = 2048;
htotal_max = 4096;
vtotal_max = 4096;
}
if (mode->hdisplay > hdisplay_max ||
mode->hsync_start > htotal_max ||
mode->hsync_end > htotal_max ||
mode->htotal > htotal_max)
return MODE_H_ILLEGAL;
if (mode->vdisplay > vdisplay_max ||
mode->vsync_start > vtotal_max ||
mode->vsync_end > vtotal_max ||
mode->vtotal > vtotal_max)
return MODE_V_ILLEGAL;
return MODE_OK;
}
@ -15029,6 +15114,7 @@ int intel_modeset_init(struct drm_device *dev)
}
}
/* maximum framebuffer dimensions */
if (IS_GEN2(dev_priv)) {
dev->mode_config.max_width = 2048;
dev->mode_config.max_height = 2048;
@ -15044,11 +15130,11 @@ int intel_modeset_init(struct drm_device *dev)
dev->mode_config.cursor_width = IS_I845G(dev_priv) ? 64 : 512;
dev->mode_config.cursor_height = 1023;
} else if (IS_GEN2(dev_priv)) {
dev->mode_config.cursor_width = GEN2_CURSOR_WIDTH;
dev->mode_config.cursor_height = GEN2_CURSOR_HEIGHT;
dev->mode_config.cursor_width = 64;
dev->mode_config.cursor_height = 64;
} else {
dev->mode_config.cursor_width = MAX_CURSOR_WIDTH;
dev->mode_config.cursor_height = MAX_CURSOR_HEIGHT;
dev->mode_config.cursor_width = 256;
dev->mode_config.cursor_height = 256;
}
dev->mode_config.fb_base = ggtt->gmadr.start;


@ -155,7 +155,7 @@ enum aux_ch {
AUX_CH_B,
AUX_CH_C,
AUX_CH_D,
_AUX_CH_E, /* does not exist */
AUX_CH_E, /* ICL+ */
AUX_CH_F,
};
@ -196,6 +196,7 @@ enum intel_display_power_domain {
POWER_DOMAIN_AUX_B,
POWER_DOMAIN_AUX_C,
POWER_DOMAIN_AUX_D,
POWER_DOMAIN_AUX_E,
POWER_DOMAIN_AUX_F,
POWER_DOMAIN_AUX_IO_A,
POWER_DOMAIN_GMBUS,


@ -256,6 +256,17 @@ static int cnl_max_source_rate(struct intel_dp *intel_dp)
return 810000;
}
static int icl_max_source_rate(struct intel_dp *intel_dp)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
enum port port = dig_port->base.port;
if (port == PORT_B)
return 540000;
return 810000;
}
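For context, rough arithmetic rather than anything taken from the patch: the 810000 kHz rate returned here is HBR3. With an 810 MHz link symbol clock, 10 bits per symbol on the wire and 8b/10b coding, each lane carries 6.48 Gbit/s of payload, the figure quoted by the ">=5.4/6.48 Gbps" debug messages added to the link-training code below.

#include <stdio.h>

int main(void)
{
	const unsigned long long link_rate_khz = 810000;		/* HBR3 */
	unsigned long long line_rate = link_rate_khz * 1000 * 10;	/* 8.1 Gbit/s on the wire */
	unsigned long long data_rate = line_rate * 8 / 10;		/* 6.48 Gbit/s after 8b/10b */

	printf("per lane: line %llu bit/s, data %llu bit/s\n", line_rate, data_rate);
	return 0;
}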
static void
intel_dp_set_source_rates(struct intel_dp *intel_dp)
{
@ -285,10 +296,13 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
/* This should only be done once */
WARN_ON(intel_dp->source_rates || intel_dp->num_source_rates);
if (IS_CANNONLAKE(dev_priv)) {
if (INTEL_GEN(dev_priv) >= 10) {
source_rates = cnl_rates;
size = ARRAY_SIZE(cnl_rates);
max_rate = cnl_max_source_rate(intel_dp);
if (INTEL_GEN(dev_priv) == 10)
max_rate = cnl_max_source_rate(intel_dp);
else
max_rate = icl_max_source_rate(intel_dp);
} else if (IS_GEN9_LP(dev_priv)) {
source_rates = bxt_rates;
size = ARRAY_SIZE(bxt_rates);
@ -420,6 +434,9 @@ intel_dp_mode_valid(struct drm_connector *connector,
int max_rate, mode_rate, max_lanes, max_link_clock;
int max_dotclk;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
max_dotclk = intel_dp_downstream_max_dotclock(intel_dp);
if (intel_dp_is_edp(intel_dp) && fixed_mode) {
@ -1347,6 +1364,9 @@ static enum aux_ch intel_aux_ch(struct intel_dp *intel_dp)
case DP_AUX_D:
aux_ch = AUX_CH_D;
break;
case DP_AUX_E:
aux_ch = AUX_CH_E;
break;
case DP_AUX_F:
aux_ch = AUX_CH_F;
break;
@ -1374,6 +1394,8 @@ intel_aux_power_domain(struct intel_dp *intel_dp)
return POWER_DOMAIN_AUX_C;
case AUX_CH_D:
return POWER_DOMAIN_AUX_D;
case AUX_CH_E:
return POWER_DOMAIN_AUX_E;
case AUX_CH_F:
return POWER_DOMAIN_AUX_F;
default:
@ -1460,6 +1482,7 @@ static i915_reg_t skl_aux_ctl_reg(struct intel_dp *intel_dp)
case AUX_CH_B:
case AUX_CH_C:
case AUX_CH_D:
case AUX_CH_E:
case AUX_CH_F:
return DP_AUX_CH_CTL(aux_ch);
default:
@ -1478,6 +1501,7 @@ static i915_reg_t skl_aux_data_reg(struct intel_dp *intel_dp, int index)
case AUX_CH_B:
case AUX_CH_C:
case AUX_CH_D:
case AUX_CH_E:
case AUX_CH_F:
return DP_AUX_CH_DATA(aux_ch, index);
default:
@ -1541,6 +1565,13 @@ bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
return max_rate >= 540000;
}
bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp)
{
int max_rate = intel_dp->source_rates[intel_dp->num_source_rates - 1];
return max_rate >= 810000;
}
static void
intel_dp_set_clock(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
@ -1862,7 +1893,10 @@ intel_dp_compute_config(struct intel_encoder *encoder,
conn_state->scaling_mode);
}
if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
if (HAS_GMCH_DISPLAY(dev_priv) &&
adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
return false;
@ -2796,16 +2830,6 @@ static void intel_disable_dp(struct intel_encoder *encoder,
static void g4x_disable_dp(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
intel_disable_dp(encoder, old_crtc_state, old_conn_state);
/* disable the port before the pipe on g4x */
intel_dp_link_down(encoder, old_crtc_state);
}
static void ilk_disable_dp(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
intel_disable_dp(encoder, old_crtc_state, old_conn_state);
}
@ -2821,13 +2845,19 @@ static void vlv_disable_dp(struct intel_encoder *encoder,
intel_disable_dp(encoder, old_crtc_state, old_conn_state);
}
static void ilk_post_disable_dp(struct intel_encoder *encoder,
static void g4x_post_disable_dp(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = encoder->port;
/*
* Bspec does not list a specific disable sequence for g4x DP.
* Follow the ilk+ sequence (disable pipe before the port) for
* g4x DP as it does not suffer from underruns like the normal
* g4x modeset sequence (disable pipe after the port).
*/
intel_dp_link_down(encoder, old_crtc_state);
/* Only ilk+ has port A */
@ -2866,10 +2896,11 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
enum port port = intel_dig_port->base.port;
uint8_t train_pat_mask = drm_dp_training_pattern_mask(intel_dp->dpcd);
if (dp_train_pat & DP_TRAINING_PATTERN_MASK)
if (dp_train_pat & train_pat_mask)
DRM_DEBUG_KMS("Using DP training pattern TPS%d\n",
dp_train_pat & DP_TRAINING_PATTERN_MASK);
dp_train_pat & train_pat_mask);
if (HAS_DDI(dev_priv)) {
uint32_t temp = I915_READ(DP_TP_CTL(port));
@ -2880,7 +2911,7 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
temp &= ~DP_TP_CTL_SCRAMBLE_DISABLE;
temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;
switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
switch (dp_train_pat & train_pat_mask) {
case DP_TRAINING_PATTERN_DISABLE:
temp |= DP_TP_CTL_LINK_TRAIN_NORMAL;
@ -2894,6 +2925,9 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
case DP_TRAINING_PATTERN_3:
temp |= DP_TP_CTL_LINK_TRAIN_PAT3;
break;
case DP_TRAINING_PATTERN_4:
temp |= DP_TP_CTL_LINK_TRAIN_PAT4;
break;
}
I915_WRITE(DP_TP_CTL(port), temp);
@ -6344,7 +6378,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
drm_connector_init(dev, connector, &intel_dp_connector_funcs, type);
drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs);
if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
if (!HAS_GMCH_DISPLAY(dev_priv))
connector->interlace_allowed = true;
connector->doublescan_allowed = 0;
@ -6387,7 +6421,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
* 0xd. Failure to do so will result in spurious interrupts being
* generated on the port when a cable is not attached.
*/
if (IS_G4X(dev_priv) && !IS_GM45(dev_priv)) {
if (IS_G45(dev_priv)) {
u32 temp = I915_READ(PEG_BAND_GAP_DATA);
I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
}
@ -6443,15 +6477,11 @@ bool intel_dp_init(struct drm_i915_private *dev_priv,
intel_encoder->enable = vlv_enable_dp;
intel_encoder->disable = vlv_disable_dp;
intel_encoder->post_disable = vlv_post_disable_dp;
} else if (INTEL_GEN(dev_priv) >= 5) {
intel_encoder->pre_enable = g4x_pre_enable_dp;
intel_encoder->enable = g4x_enable_dp;
intel_encoder->disable = ilk_disable_dp;
intel_encoder->post_disable = ilk_post_disable_dp;
} else {
intel_encoder->pre_enable = g4x_pre_enable_dp;
intel_encoder->enable = g4x_enable_dp;
intel_encoder->disable = g4x_disable_dp;
intel_encoder->post_disable = g4x_post_disable_dp;
}
intel_dig_port->dp.output_reg = output_reg;


@ -26,7 +26,7 @@
static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
{
uint8_t reg_val = 0;
u8 reg_val = 0;
/* Early return when display use other mechanism to enable backlight. */
if (!(intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP))
@ -54,11 +54,11 @@ static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
* Read the current backlight value from DPCD register(s) based
* on if 8-bit(MSB) or 16-bit(MSB and LSB) values are supported
*/
static uint32_t intel_dp_aux_get_backlight(struct intel_connector *connector)
static u32 intel_dp_aux_get_backlight(struct intel_connector *connector)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
uint8_t read_val[2] = { 0x0 };
uint16_t level = 0;
u8 read_val[2] = { 0x0 };
u16 level = 0;
if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
&read_val, sizeof(read_val)) < 0) {
@ -82,7 +82,7 @@ intel_dp_aux_set_backlight(const struct drm_connector_state *conn_state, u32 lev
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
uint8_t vals[2] = { 0x0 };
u8 vals[2] = { 0x0 };
vals[0] = level;
@ -178,7 +178,7 @@ static void intel_dp_aux_enable_backlight(const struct intel_crtc_state *crtc_st
{
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
uint8_t dpcd_buf, new_dpcd_buf, edp_backlight_mode;
u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf) != 1) {


@ -219,14 +219,30 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
}
/*
* Pick training pattern for channel equalization. Training Pattern 3 for HBR2
* Pick training pattern for channel equalization. Training pattern 4 for HBR3
* or for 1.4 devices that support it, training Pattern 3 for HBR2
* or 1.2 devices that support it, Training Pattern 2 otherwise.
*/
static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
{
u32 training_pattern = DP_TRAINING_PATTERN_2;
bool source_tps3, sink_tps3;
bool source_tps3, sink_tps3, source_tps4, sink_tps4;
/*
* Intel platforms that support HBR3 also support TPS4. It is mandatory
* for all downstream devices that support HBR3. There are no known eDP
* panels that support TPS4 as of Feb 2018 as per VESA eDP_v1.4b_E1
* specification.
*/
source_tps4 = intel_dp_source_supports_hbr3(intel_dp);
sink_tps4 = drm_dp_tps4_supported(intel_dp->dpcd);
if (source_tps4 && sink_tps4) {
return DP_TRAINING_PATTERN_4;
} else if (intel_dp->link_rate == 810000) {
if (!source_tps4)
DRM_DEBUG_KMS("8.1 Gbps link rate without source HBR3/TPS4 support\n");
if (!sink_tps4)
DRM_DEBUG_KMS("8.1 Gbps link rate without sink TPS4 support\n");
}
/*
* Intel platforms that support HBR2 also support TPS3. TPS3 support is
* also mandatory for downstream devices that support HBR2. However, not
@ -234,17 +250,16 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
*/
source_tps3 = intel_dp_source_supports_hbr2(intel_dp);
sink_tps3 = drm_dp_tps3_supported(intel_dp->dpcd);
if (source_tps3 && sink_tps3) {
training_pattern = DP_TRAINING_PATTERN_3;
} else if (intel_dp->link_rate == 540000) {
return DP_TRAINING_PATTERN_3;
} else if (intel_dp->link_rate >= 540000) {
if (!source_tps3)
DRM_DEBUG_KMS("5.4 Gbps link rate without source HBR2/TPS3 support\n");
DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without source HBR2/TPS3 support\n");
if (!sink_tps3)
DRM_DEBUG_KMS("5.4 Gbps link rate without sink TPS3 support\n");
DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without sink TPS3 support\n");
}
return training_pattern;
return DP_TRAINING_PATTERN_2;
}
static bool
@ -256,11 +271,13 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
bool channel_eq = false;
training_pattern = intel_dp_training_pattern(intel_dp);
/* Scrambling is disabled for TPS2/3 and enabled for TPS4 */
if (training_pattern != DP_TRAINING_PATTERN_4)
training_pattern |= DP_LINK_SCRAMBLING_DISABLE;
/* channel equalization */
if (!intel_dp_set_link_train(intel_dp,
training_pattern |
DP_LINK_SCRAMBLING_DISABLE)) {
training_pattern)) {
DRM_ERROR("failed to start channel equalization\n");
return false;
}


@ -48,6 +48,9 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc,
DP_DPCD_QUIRK_LIMITED_M_N);
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
pipe_config->has_pch_encoder = false;
bpp = 24;
if (intel_dp->compliance.test_data.bpc) {
@ -366,6 +369,9 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
if (!intel_dp)
return MODE_ERROR;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
max_link_clock = intel_dp_max_link_rate(intel_dp);
max_lanes = intel_dp_max_lane_count(intel_dp);


@ -2857,10 +2857,17 @@ icl_get_dpll(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state,
case PORT_D:
case PORT_E:
case PORT_F:
min = icl_port_to_mg_pll_id(port);
max = min;
ret = icl_calc_mg_pll_state(crtc_state, encoder, clock,
&pll_state);
if (0 /* TODO: TBT PLLs */) {
min = DPLL_ID_ICL_TBTPLL;
max = min;
ret = icl_calc_dpll_state(crtc_state, encoder, clock,
&pll_state);
} else {
min = icl_port_to_mg_pll_id(port);
max = min;
ret = icl_calc_mg_pll_state(crtc_state, encoder, clock,
&pll_state);
}
break;
default:
MISSING_CASE(port);
@ -2893,6 +2900,8 @@ static i915_reg_t icl_pll_id_to_enable_reg(enum intel_dpll_id id)
case DPLL_ID_ICL_DPLL0:
case DPLL_ID_ICL_DPLL1:
return CNL_DPLL_ENABLE(id);
case DPLL_ID_ICL_TBTPLL:
return TBT_PLL_ENABLE;
case DPLL_ID_ICL_MGPLL1:
case DPLL_ID_ICL_MGPLL2:
case DPLL_ID_ICL_MGPLL3:
@ -2920,6 +2929,7 @@ static bool icl_pll_get_hw_state(struct drm_i915_private *dev_priv,
switch (id) {
case DPLL_ID_ICL_DPLL0:
case DPLL_ID_ICL_DPLL1:
case DPLL_ID_ICL_TBTPLL:
hw_state->cfgcr0 = I915_READ(ICL_DPLL_CFGCR0(id));
hw_state->cfgcr1 = I915_READ(ICL_DPLL_CFGCR1(id));
break;
@ -3006,6 +3016,7 @@ static void icl_pll_enable(struct drm_i915_private *dev_priv,
switch (id) {
case DPLL_ID_ICL_DPLL0:
case DPLL_ID_ICL_DPLL1:
case DPLL_ID_ICL_TBTPLL:
icl_dpll_write(dev_priv, pll);
break;
case DPLL_ID_ICL_MGPLL1:
@ -3104,6 +3115,7 @@ static const struct intel_shared_dpll_funcs icl_pll_funcs = {
static const struct dpll_info icl_plls[] = {
{ "DPLL 0", &icl_pll_funcs, DPLL_ID_ICL_DPLL0, 0 },
{ "DPLL 1", &icl_pll_funcs, DPLL_ID_ICL_DPLL1, 0 },
{ "TBT PLL", &icl_pll_funcs, DPLL_ID_ICL_TBTPLL, 0 },
{ "MG PLL 1", &icl_pll_funcs, DPLL_ID_ICL_MGPLL1, 0 },
{ "MG PLL 2", &icl_pll_funcs, DPLL_ID_ICL_MGPLL2, 0 },
{ "MG PLL 3", &icl_pll_funcs, DPLL_ID_ICL_MGPLL3, 0 },


@ -113,24 +113,28 @@ enum intel_dpll_id {
* @DPLL_ID_ICL_DPLL1: ICL combo PHY DPLL1
*/
DPLL_ID_ICL_DPLL1 = 1,
/**
* @DPLL_ID_ICL_TBTPLL: ICL TBT PLL
*/
DPLL_ID_ICL_TBTPLL = 2,
/**
* @DPLL_ID_ICL_MGPLL1: ICL MG PLL 1 port 1 (C)
*/
DPLL_ID_ICL_MGPLL1 = 2,
DPLL_ID_ICL_MGPLL1 = 3,
/**
* @DPLL_ID_ICL_MGPLL2: ICL MG PLL 1 port 2 (D)
*/
DPLL_ID_ICL_MGPLL2 = 3,
DPLL_ID_ICL_MGPLL2 = 4,
/**
* @DPLL_ID_ICL_MGPLL3: ICL MG PLL 1 port 3 (E)
*/
DPLL_ID_ICL_MGPLL3 = 4,
DPLL_ID_ICL_MGPLL3 = 5,
/**
* @DPLL_ID_ICL_MGPLL4: ICL MG PLL 1 port 4 (F)
*/
DPLL_ID_ICL_MGPLL4 = 5,
DPLL_ID_ICL_MGPLL4 = 6,
};
#define I915_NUM_PLLS 6
#define I915_NUM_PLLS 7
struct intel_dpll_hw_state {
/* i9xx, pch plls */


@ -158,12 +158,6 @@
#define MAX_OUTPUTS 6
/* maximum connectors per crtcs in the mode set */
/* Maximum cursor sizes */
#define GEN2_CURSOR_WIDTH 64
#define GEN2_CURSOR_HEIGHT 64
#define MAX_CURSOR_WIDTH 256
#define MAX_CURSOR_HEIGHT 256
#define INTEL_I2C_BUS_DVO 1
#define INTEL_I2C_BUS_SDVO 2
@ -1716,6 +1710,7 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
uint8_t *link_bw, uint8_t *rate_select);
bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp);
bool
intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);


@ -326,6 +326,9 @@ static bool intel_dsi_compute_config(struct intel_encoder *encoder,
conn_state->scaling_mode);
}
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
/* DSI uses short packets for sync events, so clear mode flags for DSI */
adjusted_mode->flags = 0;
@ -1266,6 +1269,9 @@ intel_dsi_mode_valid(struct drm_connector *connector,
DRM_DEBUG_KMS("\n");
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (fixed_mode) {
if (mode->hdisplay > fixed_mode->hdisplay)
return MODE_PANEL;


@ -215,6 +215,9 @@ intel_dvo_mode_valid(struct drm_connector *connector,
int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
int target_clock = mode->clock;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
/* XXX: Validate clock range */
if (fixed_mode) {
@ -250,6 +253,9 @@ static bool intel_dvo_compute_config(struct intel_encoder *encoder,
if (fixed_mode)
intel_fixed_panel_mode(fixed_mode, adjusted_mode);
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
return true;
}
@ -432,7 +438,7 @@ void intel_dvo_init(struct drm_i915_private *dev_priv)
int gpio;
bool dvoinit;
enum pipe pipe;
uint32_t dpll[I915_MAX_PIPES];
u32 dpll[I915_MAX_PIPES];
enum port port;
/*


@ -499,7 +499,8 @@ void intel_engine_setup_common(struct intel_engine_cs *engine)
intel_engine_init_cmd_parser(engine);
}
int intel_engine_create_scratch(struct intel_engine_cs *engine, int size)
int intel_engine_create_scratch(struct intel_engine_cs *engine,
unsigned int size)
{
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
@ -533,7 +534,7 @@ int intel_engine_create_scratch(struct intel_engine_cs *engine, int size)
return ret;
}
static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
{
i915_vma_unpin_and_release(&engine->scratch);
}
@ -1076,6 +1077,28 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915)
engine->set_default_submission(engine);
}
/**
* intel_engines_sanitize: called after the GPU has lost power
* @i915: the i915 device
*
* Anytime we reset the GPU, either with an explicit GPU reset or through a
* PCI power cycle, the GPU loses state and we must reset our state tracking
* to match. Note that calling intel_engines_sanitize() if the GPU has not
* been reset results in much confusion!
*/
void intel_engines_sanitize(struct drm_i915_private *i915)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
GEM_TRACE("\n");
for_each_engine(engine, i915, id) {
if (engine->reset.reset)
engine->reset.reset(engine, NULL);
}
}
/**
* intel_engines_park: called when the GT is transitioning from busy->idle
* @i915: the i915 device
@ -1168,9 +1191,6 @@ void intel_engine_lost_context(struct intel_engine_cs *engine)
lockdep_assert_held(&engine->i915->drm.struct_mutex);
engine->legacy_active_context = NULL;
engine->legacy_active_ppgtt = NULL;
ce = fetch_and_zero(&engine->last_retired_context);
if (ce)
intel_context_unpin(ce);
@ -1260,7 +1280,7 @@ static void hexdump(struct drm_printer *m, const void *buf, size_t len)
rowsize, sizeof(u32),
line, sizeof(line),
false) >= sizeof(line));
drm_printf(m, "%08zx %s\n", pos, line);
drm_printf(m, "[%04zx] %s\n", pos, line);
prev = buf + pos;
skip = false;
@ -1275,6 +1295,8 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
&engine->execlists;
u64 addr;
if (engine->id == RCS && IS_GEN(dev_priv, 4, 7))
drm_printf(m, "\tCCID: 0x%08x\n", I915_READ(CCID));
drm_printf(m, "\tRING_START: 0x%08x\n",
I915_READ(RING_START(engine->mmio_base)));
drm_printf(m, "\tRING_HEAD: 0x%08x\n",
@ -1396,6 +1418,39 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
}
}
static void print_request_ring(struct drm_printer *m, struct i915_request *rq)
{
void *ring;
int size;
drm_printf(m,
"[head %04x, postfix %04x, tail %04x, batch 0x%08x_%08x]:\n",
rq->head, rq->postfix, rq->tail,
rq->batch ? upper_32_bits(rq->batch->node.start) : ~0u,
rq->batch ? lower_32_bits(rq->batch->node.start) : ~0u);
size = rq->tail - rq->head;
if (rq->tail < rq->head)
size += rq->ring->size;
ring = kmalloc(size, GFP_ATOMIC);
if (ring) {
const void *vaddr = rq->ring->vaddr;
unsigned int head = rq->head;
unsigned int len = 0;
if (rq->tail < head) {
len = rq->ring->size - head;
memcpy(ring, vaddr + head, len);
head = 0;
}
memcpy(ring + len, vaddr + head, size - len);
hexdump(m, ring, size);
kfree(ring);
}
}
void intel_engine_dump(struct intel_engine_cs *engine,
struct drm_printer *m,
const char *header, ...)
@ -1446,11 +1501,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
rq = i915_gem_find_active_request(engine);
if (rq) {
print_request(m, rq, "\t\tactive ");
drm_printf(m,
"\t\t[head %04x, postfix %04x, tail %04x, batch 0x%08x_%08x]\n",
rq->head, rq->postfix, rq->tail,
rq->batch ? upper_32_bits(rq->batch->node.start) : ~0u,
rq->batch ? lower_32_bits(rq->batch->node.start) : ~0u);
drm_printf(m, "\t\tring->start: 0x%08x\n",
i915_ggtt_offset(rq->ring->vma));
drm_printf(m, "\t\tring->head: 0x%08x\n",
@ -1461,6 +1512,8 @@ void intel_engine_dump(struct intel_engine_cs *engine,
rq->ring->emit);
drm_printf(m, "\t\tring->space: 0x%08x\n",
rq->ring->space);
print_request_ring(m, rq);
}
rcu_read_unlock();


@ -203,13 +203,11 @@ void intel_guc_fini(struct intel_guc *guc)
guc_shared_data_destroy(guc);
}
static u32 get_log_control_flags(void)
static u32 guc_ctl_debug_flags(struct intel_guc *guc)
{
u32 level = i915_modparams.guc_log_level;
u32 level = intel_guc_log_get_level(&guc->log);
u32 flags = 0;
GEM_BUG_ON(level < 0);
if (!GUC_LOG_LEVEL_IS_ENABLED(level))
flags |= GUC_LOG_DEFAULT_DISABLED;
@ -219,6 +217,85 @@ static u32 get_log_control_flags(void)
flags |= GUC_LOG_LEVEL_TO_VERBOSITY(level) <<
GUC_LOG_VERBOSITY_SHIFT;
if (USES_GUC_SUBMISSION(guc_to_i915(guc))) {
u32 ads = intel_guc_ggtt_offset(guc, guc->ads_vma)
>> PAGE_SHIFT;
flags |= ads << GUC_ADS_ADDR_SHIFT | GUC_ADS_ENABLED;
}
return flags;
}
static u32 guc_ctl_feature_flags(struct intel_guc *guc)
{
u32 flags = 0;
flags |= GUC_CTL_VCS2_ENABLED;
if (USES_GUC_SUBMISSION(guc_to_i915(guc)))
flags |= GUC_CTL_KERNEL_SUBMISSIONS;
else
flags |= GUC_CTL_DISABLE_SCHEDULER;
return flags;
}
static u32 guc_ctl_ctxinfo_flags(struct intel_guc *guc)
{
u32 flags = 0;
if (USES_GUC_SUBMISSION(guc_to_i915(guc))) {
u32 ctxnum, base;
base = intel_guc_ggtt_offset(guc, guc->stage_desc_pool);
ctxnum = GUC_MAX_STAGE_DESCRIPTORS / 16;
base >>= PAGE_SHIFT;
flags |= (base << GUC_CTL_BASE_ADDR_SHIFT) |
(ctxnum << GUC_CTL_CTXNUM_IN16_SHIFT);
}
return flags;
}
static u32 guc_ctl_log_params_flags(struct intel_guc *guc)
{
u32 offset = intel_guc_ggtt_offset(guc, guc->log.vma) >> PAGE_SHIFT;
u32 flags;
#if (((CRASH_BUFFER_SIZE) % SZ_1M) == 0)
#define UNIT SZ_1M
#define FLAG GUC_LOG_ALLOC_IN_MEGABYTE
#else
#define UNIT SZ_4K
#define FLAG 0
#endif
BUILD_BUG_ON(!CRASH_BUFFER_SIZE);
BUILD_BUG_ON(!IS_ALIGNED(CRASH_BUFFER_SIZE, UNIT));
BUILD_BUG_ON(!DPC_BUFFER_SIZE);
BUILD_BUG_ON(!IS_ALIGNED(DPC_BUFFER_SIZE, UNIT));
BUILD_BUG_ON(!ISR_BUFFER_SIZE);
BUILD_BUG_ON(!IS_ALIGNED(ISR_BUFFER_SIZE, UNIT));
BUILD_BUG_ON((CRASH_BUFFER_SIZE / UNIT - 1) >
(GUC_LOG_CRASH_MASK >> GUC_LOG_CRASH_SHIFT));
BUILD_BUG_ON((DPC_BUFFER_SIZE / UNIT - 1) >
(GUC_LOG_DPC_MASK >> GUC_LOG_DPC_SHIFT));
BUILD_BUG_ON((ISR_BUFFER_SIZE / UNIT - 1) >
(GUC_LOG_ISR_MASK >> GUC_LOG_ISR_SHIFT));
flags = GUC_LOG_VALID |
GUC_LOG_NOTIFY_ON_HALF_FULL |
FLAG |
((CRASH_BUFFER_SIZE / UNIT - 1) << GUC_LOG_CRASH_SHIFT) |
((DPC_BUFFER_SIZE / UNIT - 1) << GUC_LOG_DPC_SHIFT) |
((ISR_BUFFER_SIZE / UNIT - 1) << GUC_LOG_ISR_SHIFT) |
(offset << GUC_LOG_BUF_ADDR_SHIFT);
#undef UNIT
#undef FLAG
return flags;
}
@ -245,32 +322,10 @@ void intel_guc_init_params(struct intel_guc *guc)
params[GUC_CTL_WA] |= GUC_CTL_WA_UK_BY_DRIVER;
params[GUC_CTL_FEATURE] |= GUC_CTL_DISABLE_SCHEDULER |
GUC_CTL_VCS2_ENABLED;
params[GUC_CTL_LOG_PARAMS] = guc->log.flags;
params[GUC_CTL_DEBUG] = get_log_control_flags();
/* If GuC submission is enabled, set up additional parameters here */
if (USES_GUC_SUBMISSION(dev_priv)) {
u32 ads = intel_guc_ggtt_offset(guc,
guc->ads_vma) >> PAGE_SHIFT;
u32 pgs = intel_guc_ggtt_offset(guc, guc->stage_desc_pool);
u32 ctx_in_16 = GUC_MAX_STAGE_DESCRIPTORS / 16;
params[GUC_CTL_DEBUG] |= ads << GUC_ADS_ADDR_SHIFT;
params[GUC_CTL_DEBUG] |= GUC_ADS_ENABLED;
pgs >>= PAGE_SHIFT;
params[GUC_CTL_CTXINFO] = (pgs << GUC_CTL_BASE_ADDR_SHIFT) |
(ctx_in_16 << GUC_CTL_CTXNUM_IN16_SHIFT);
params[GUC_CTL_FEATURE] |= GUC_CTL_KERNEL_SUBMISSIONS;
/* Unmask this bit to enable the GuC's internal scheduler */
params[GUC_CTL_FEATURE] &= ~GUC_CTL_DISABLE_SCHEDULER;
}
params[GUC_CTL_FEATURE] = guc_ctl_feature_flags(guc);
params[GUC_CTL_LOG_PARAMS] = guc_ctl_log_params_flags(guc);
params[GUC_CTL_DEBUG] = guc_ctl_debug_flags(guc);
params[GUC_CTL_CTXINFO] = guc_ctl_ctxinfo_flags(guc);
/*
* All SOFT_SCRATCH registers are in FORCEWAKE_BLITTER domain and


@ -84,12 +84,12 @@
#define GUC_LOG_VALID (1 << 0)
#define GUC_LOG_NOTIFY_ON_HALF_FULL (1 << 1)
#define GUC_LOG_ALLOC_IN_MEGABYTE (1 << 3)
#define GUC_LOG_CRASH_PAGES 1
#define GUC_LOG_CRASH_SHIFT 4
#define GUC_LOG_DPC_PAGES 7
#define GUC_LOG_CRASH_MASK (0x1 << GUC_LOG_CRASH_SHIFT)
#define GUC_LOG_DPC_SHIFT 6
#define GUC_LOG_ISR_PAGES 7
#define GUC_LOG_DPC_MASK (0x7 << GUC_LOG_DPC_SHIFT)
#define GUC_LOG_ISR_SHIFT 9
#define GUC_LOG_ISR_MASK (0x7 << GUC_LOG_ISR_SHIFT)
#define GUC_LOG_BUF_ADDR_SHIFT 12
#define GUC_CTL_PAGE_FAULT_CONTROL 5
@ -532,20 +532,6 @@ enum guc_log_buffer_type {
};
/**
* DOC: GuC Log buffer Layout
*
* Page0 +-------------------------------+
* | ISR state header (32 bytes) |
* | DPC state header |
* | Crash dump state header |
* Page1 +-------------------------------+
* | ISR logs |
* Page9 +-------------------------------+
* | DPC logs |
* Page17 +-------------------------------+
* | Crash Dump logs |
* +-------------------------------+
*
* Below state structure is used for coordination of retrieval of GuC firmware
* logs. Separate state is maintained for each log buffer type.
* read_ptr points to the location where i915 read last in log buffer and


@ -215,11 +215,11 @@ static unsigned int guc_get_log_buffer_size(enum guc_log_buffer_type type)
{
switch (type) {
case GUC_ISR_LOG_BUFFER:
return (GUC_LOG_ISR_PAGES + 1) * PAGE_SIZE;
return ISR_BUFFER_SIZE;
case GUC_DPC_LOG_BUFFER:
return (GUC_LOG_DPC_PAGES + 1) * PAGE_SIZE;
return DPC_BUFFER_SIZE;
case GUC_CRASH_DUMP_LOG_BUFFER:
return (GUC_LOG_CRASH_PAGES + 1) * PAGE_SIZE;
return CRASH_BUFFER_SIZE;
default:
MISSING_CASE(type);
}
@ -397,7 +397,7 @@ static int guc_log_relay_create(struct intel_guc_log *log)
lockdep_assert_held(&log->relay.lock);
/* Keep the size of sub buffers same as shared log buffer */
subbuf_size = GUC_LOG_SIZE;
subbuf_size = log->vma->size;
/*
* Store up to 8 snapshots, which is large enough to buffer sufficient
@ -452,13 +452,34 @@ int intel_guc_log_create(struct intel_guc_log *log)
{
struct intel_guc *guc = log_to_guc(log);
struct i915_vma *vma;
unsigned long offset;
u32 flags;
u32 guc_log_size;
int ret;
GEM_BUG_ON(log->vma);
vma = intel_guc_allocate_vma(guc, GUC_LOG_SIZE);
/*
* GuC Log buffer Layout
*
* +===============================+ 00B
* |    Crash dump state header    |
* +-------------------------------+ 32B
* |       DPC state header        |
* +-------------------------------+ 64B
* |       ISR state header        |
* +-------------------------------+ 96B
* |                               |
* +===============================+ PAGE_SIZE (4KB)
* |        Crash Dump logs        |
* +===============================+ + CRASH_SIZE
* |           DPC logs            |
* +===============================+ + DPC_SIZE
* |           ISR logs            |
* +===============================+ + ISR_SIZE
*/
guc_log_size = PAGE_SIZE + CRASH_BUFFER_SIZE + DPC_BUFFER_SIZE +
ISR_BUFFER_SIZE;
vma = intel_guc_allocate_vma(guc, guc_log_size);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto err;
@ -466,20 +487,12 @@ int intel_guc_log_create(struct intel_guc_log *log)
log->vma = vma;
/* each allocated unit is a page */
flags = GUC_LOG_VALID | GUC_LOG_NOTIFY_ON_HALF_FULL |
(GUC_LOG_DPC_PAGES << GUC_LOG_DPC_SHIFT) |
(GUC_LOG_ISR_PAGES << GUC_LOG_ISR_SHIFT) |
(GUC_LOG_CRASH_PAGES << GUC_LOG_CRASH_SHIFT);
offset = intel_guc_ggtt_offset(guc, vma) >> PAGE_SHIFT;
log->flags = (offset << GUC_LOG_BUF_ADDR_SHIFT) | flags;
log->level = i915_modparams.guc_log_level;
return 0;
err:
/* logging will be off */
i915_modparams.guc_log_level = 0;
DRM_ERROR("Failed to allocate GuC log buffer. %d\n", ret);
return ret;
}
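
A quick sanity check on the layout comment above: the total allocation is one page of state headers plus the three log regions. The standalone arithmetic below is purely illustrative and just plugs in the two size configurations defined later in intel_guc_log.h.

/* Illustrative only: total GuC log allocation for both size configurations. */
#include <stdio.h>

#define PAGE_SZ 4096u
#define SZ_8K   (8u << 10)
#define SZ_32K  (32u << 10)
#define SZ_2M   (2u << 20)
#define SZ_8M   (8u << 20)

static unsigned int log_size(unsigned int crash, unsigned int dpc, unsigned int isr)
{
	return PAGE_SZ + crash + dpc + isr;     /* headers page + three log regions */
}

int main(void)
{
	printf("default: %u KiB\n", log_size(SZ_8K, SZ_32K, SZ_32K) >> 10);    /* 76 */
	printf("debug:   %u KiB\n", log_size(SZ_2M, SZ_8M, SZ_8M) >> 10);      /* 18436 */
	return 0;
}
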
@ -488,15 +501,7 @@ void intel_guc_log_destroy(struct intel_guc_log *log)
i915_vma_unpin_and_release(&log->vma);
}
int intel_guc_log_level_get(struct intel_guc_log *log)
{
GEM_BUG_ON(!log->vma);
GEM_BUG_ON(i915_modparams.guc_log_level < 0);
return i915_modparams.guc_log_level;
}
int intel_guc_log_level_set(struct intel_guc_log *log, u64 val)
int intel_guc_log_set_level(struct intel_guc_log *log, u32 level)
{
struct intel_guc *guc = log_to_guc(log);
struct drm_i915_private *dev_priv = guc_to_i915(guc);
@ -504,33 +509,32 @@ int intel_guc_log_level_set(struct intel_guc_log *log, u64 val)
BUILD_BUG_ON(GUC_LOG_VERBOSITY_MIN != 0);
GEM_BUG_ON(!log->vma);
GEM_BUG_ON(i915_modparams.guc_log_level < 0);
/*
* GuC recognizes log levels from 0 up to the maximum; we use 0 to
* indicate that logging should be disabled.
*/
if (val < GUC_LOG_LEVEL_DISABLED || val > GUC_LOG_LEVEL_MAX)
if (level < GUC_LOG_LEVEL_DISABLED || level > GUC_LOG_LEVEL_MAX)
return -EINVAL;
mutex_lock(&dev_priv->drm.struct_mutex);
if (i915_modparams.guc_log_level == val) {
if (log->level == level) {
ret = 0;
goto out_unlock;
}
intel_runtime_pm_get(dev_priv);
ret = guc_action_control_log(guc, GUC_LOG_LEVEL_IS_VERBOSE(val),
GUC_LOG_LEVEL_IS_ENABLED(val),
GUC_LOG_LEVEL_TO_VERBOSITY(val));
ret = guc_action_control_log(guc, GUC_LOG_LEVEL_IS_VERBOSE(level),
GUC_LOG_LEVEL_IS_ENABLED(level),
GUC_LOG_LEVEL_TO_VERBOSITY(level));
intel_runtime_pm_put(dev_priv);
if (ret) {
DRM_DEBUG_DRIVER("guc_log_control action failed %d\n", ret);
goto out_unlock;
}
i915_modparams.guc_log_level = val;
log->level = level;
out_unlock:
mutex_unlock(&dev_priv->drm.struct_mutex);
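
For orientation, the three GUC_LOG_LEVEL_* helpers used above appear to map the plain i915 log level onto a pair of GuC controls: level 0 disables logging, level 1 enables non-verbose output, and higher levels select a verbosity of level minus two. The sketch below encodes that reading; treat the exact conversion as an assumption drawn from the surrounding code, not a statement of the firmware interface.

/* Assumed mapping from the i915 log level to the GuC log controls. */
#include <stdbool.h>
#include <stdio.h>

static bool level_is_enabled(unsigned int level) { return level > 0; }
static bool level_is_verbose(unsigned int level) { return level > 1; }

static unsigned int level_to_verbosity(unsigned int level)
{
	return level_is_verbose(level) ? level - 2 : 0;
}

int main(void)
{
	for (unsigned int level = 0; level <= 4; level++)
		printf("level %u: enabled=%d verbose=%d verbosity=%u\n",
		       level, level_is_enabled(level),
		       level_is_verbose(level), level_to_verbosity(level));
	return 0;
}
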


@ -30,15 +30,19 @@
#include <linux/workqueue.h>
#include "intel_guc_fwif.h"
#include "i915_gem.h"
struct intel_guc;
/*
* The first page is to save log buffer state. Allocate one
* extra page for others in case for overlap
*/
#define GUC_LOG_SIZE ((1 + GUC_LOG_DPC_PAGES + 1 + GUC_LOG_ISR_PAGES + \
1 + GUC_LOG_CRASH_PAGES + 1) << PAGE_SHIFT)
#ifdef CONFIG_DRM_I915_DEBUG_GUC
#define CRASH_BUFFER_SIZE SZ_2M
#define DPC_BUFFER_SIZE SZ_8M
#define ISR_BUFFER_SIZE SZ_8M
#else
#define CRASH_BUFFER_SIZE SZ_8K
#define DPC_BUFFER_SIZE SZ_32K
#define ISR_BUFFER_SIZE SZ_32K
#endif
/*
* While we're using plain log level in i915, GuC controls are much more...
@ -58,7 +62,7 @@ struct intel_guc;
#define GUC_LOG_LEVEL_MAX GUC_VERBOSITY_TO_LOG_LEVEL(GUC_LOG_VERBOSITY_MAX)
struct intel_guc_log {
u32 flags;
u32 level;
struct i915_vma *vma;
struct {
void *buf_addr;
@ -80,8 +84,7 @@ void intel_guc_log_init_early(struct intel_guc_log *log);
int intel_guc_log_create(struct intel_guc_log *log);
void intel_guc_log_destroy(struct intel_guc_log *log);
int intel_guc_log_level_get(struct intel_guc_log *log);
int intel_guc_log_level_set(struct intel_guc_log *log, u64 control_val);
int intel_guc_log_set_level(struct intel_guc_log *log, u32 level);
bool intel_guc_log_relay_enabled(const struct intel_guc_log *log);
int intel_guc_log_relay_open(struct intel_guc_log *log);
void intel_guc_log_relay_flush(struct intel_guc_log *log);
@ -89,4 +92,9 @@ void intel_guc_log_relay_close(struct intel_guc_log *log);
void intel_guc_log_handle_flush_event(struct intel_guc_log *log);
static inline u32 intel_guc_log_get_level(struct intel_guc_log *log)
{
return log->level;
}
#endif


@ -47,6 +47,8 @@ static bool is_supported_device(struct drm_i915_private *dev_priv)
return true;
if (IS_KABYLAKE(dev_priv))
return true;
if (IS_BROXTON(dev_priv))
return true;
return false;
}


@ -294,6 +294,7 @@ static void hangcheck_store_sample(struct intel_engine_cs *engine,
engine->hangcheck.seqno = hc->seqno;
engine->hangcheck.action = hc->action;
engine->hangcheck.stalled = hc->stalled;
engine->hangcheck.wedged = hc->wedged;
}
static enum intel_engine_hangcheck_action
@ -368,6 +369,9 @@ static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
hc->stalled = time_after(jiffies,
engine->hangcheck.action_timestamp + timeout);
hc->wedged = time_after(jiffies,
engine->hangcheck.action_timestamp +
I915_ENGINE_WEDGED_TIMEOUT);
}
static void hangcheck_declare_hang(struct drm_i915_private *i915,
@ -409,7 +413,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
gpu_error.hangcheck_work.work);
struct intel_engine_cs *engine;
enum intel_engine_id id;
unsigned int hung = 0, stuck = 0;
unsigned int hung = 0, stuck = 0, wedged = 0;
if (!i915_modparams.enable_hangcheck)
return;
@ -440,6 +444,17 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (hc.action != ENGINE_DEAD)
stuck |= intel_engine_flag(engine);
}
if (engine->hangcheck.wedged)
wedged |= intel_engine_flag(engine);
}
if (wedged) {
dev_err(dev_priv->drm.dev,
"GPU recovery timed out,"
" cancelling all in-flight rendering.\n");
GEM_TRACE_DUMP();
i915_gem_set_wedged(dev_priv);
}
if (hung)


@ -51,7 +51,7 @@ assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
{
struct drm_device *dev = intel_hdmi_to_dev(intel_hdmi);
struct drm_i915_private *dev_priv = to_i915(dev);
uint32_t enabled_bits;
u32 enabled_bits;
enabled_bits = HAS_DDI(dev_priv) ? DDI_BUF_CTL_ENABLE : SDVO_ENABLE;
@ -59,6 +59,15 @@ assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
"HDMI port enabled, expecting disabled\n");
}
static void
assert_hdmi_transcoder_func_disabled(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder)
{
WARN(I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder)) &
TRANS_DDI_FUNC_ENABLE,
"HDMI transcoder function enabled, expecting disabled\n");
}
struct intel_hdmi *enc_to_intel_hdmi(struct drm_encoder *encoder)
{
struct intel_digital_port *intel_dig_port =
@ -144,7 +153,7 @@ static void g4x_write_infoframe(struct drm_encoder *encoder,
unsigned int type,
const void *frame, ssize_t len)
{
const uint32_t *data = frame;
const u32 *data = frame;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
u32 val = I915_READ(VIDEO_DIP_CTL);
@ -199,7 +208,7 @@ static void ibx_write_infoframe(struct drm_encoder *encoder,
unsigned int type,
const void *frame, ssize_t len)
{
const uint32_t *data = frame;
const u32 *data = frame;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@ -259,7 +268,7 @@ static void cpt_write_infoframe(struct drm_encoder *encoder,
unsigned int type,
const void *frame, ssize_t len)
{
const uint32_t *data = frame;
const u32 *data = frame;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@ -317,7 +326,7 @@ static void vlv_write_infoframe(struct drm_encoder *encoder,
unsigned int type,
const void *frame, ssize_t len)
{
const uint32_t *data = frame;
const u32 *data = frame;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@ -376,19 +385,16 @@ static void hsw_write_infoframe(struct drm_encoder *encoder,
unsigned int type,
const void *frame, ssize_t len)
{
const uint32_t *data = frame;
const u32 *data = frame;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
i915_reg_t ctl_reg = HSW_TVIDEO_DIP_CTL(cpu_transcoder);
i915_reg_t data_reg;
int data_size = type == DP_SDP_VSC ?
VIDEO_DIP_VSC_DATA_SIZE : VIDEO_DIP_DATA_SIZE;
int i;
u32 val = I915_READ(ctl_reg);
data_reg = hsw_dip_data_reg(dev_priv, cpu_transcoder, type, 0);
val &= ~hsw_infoframe_enable(type);
I915_WRITE(ctl_reg, val);
@ -442,7 +448,7 @@ static void intel_write_infoframe(struct drm_encoder *encoder,
union hdmi_infoframe *frame)
{
struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
uint8_t buffer[VIDEO_DIP_DATA_SIZE];
u8 buffer[VIDEO_DIP_DATA_SIZE];
ssize_t len;
/* see comment above for the reason for this offset */
@ -838,11 +844,11 @@ static void hsw_set_infoframes(struct drm_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->dev);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
u32 val = I915_READ(reg);
assert_hdmi_port_disabled(intel_hdmi);
assert_hdmi_transcoder_func_disabled(dev_priv,
crtc_state->cpu_transcoder);
val &= ~(VIDEO_DIP_ENABLE_VSC_HSW | VIDEO_DIP_ENABLE_AVI_HSW |
VIDEO_DIP_ENABLE_GCP_HSW | VIDEO_DIP_ENABLE_VS_HSW |
@ -1544,6 +1550,9 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
bool force_dvi =
READ_ONCE(to_intel_digital_connector_state(connector->state)->force_audio) == HDMI_AUDIO_OFF_DVI;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
clock = mode->clock;
if ((mode->flags & DRM_MODE_FLAG_3D_MASK) == DRM_MODE_FLAG_3D_FRAME_PACKING)
@ -1561,14 +1570,23 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
/* check if we can do 8bpc */
status = hdmi_port_clock_valid(hdmi, clock, true, force_dvi);
/* if we can't do 8bpc we may still be able to do 12bpc */
if (!HAS_GMCH_DISPLAY(dev_priv) && status != MODE_OK && hdmi->has_hdmi_sink && !force_dvi)
status = hdmi_port_clock_valid(hdmi, clock * 3 / 2, true, force_dvi);
if (hdmi->has_hdmi_sink && !force_dvi) {
/* if we can't do 8bpc we may still be able to do 12bpc */
if (status != MODE_OK && !HAS_GMCH_DISPLAY(dev_priv))
status = hdmi_port_clock_valid(hdmi, clock * 3 / 2,
true, force_dvi);
/* if we can't do 8,12bpc we may still be able to do 10bpc */
if (status != MODE_OK && INTEL_GEN(dev_priv) >= 11)
status = hdmi_port_clock_valid(hdmi, clock * 5 / 4,
true, force_dvi);
}
return status;
}
static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
static bool hdmi_deep_color_possible(const struct intel_crtc_state *crtc_state,
int bpc)
{
struct drm_i915_private *dev_priv =
to_i915(crtc_state->base.crtc->dev);
@ -1580,6 +1598,9 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
if (HAS_GMCH_DISPLAY(dev_priv))
return false;
if (bpc == 10 && INTEL_GEN(dev_priv) < 11)
return false;
if (crtc_state->pipe_bpp <= 8*3)
return false;
@ -1587,7 +1608,7 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
return false;
/*
* HDMI 12bpc affects the clocks, so it's only possible
* HDMI deep color affects the clocks, so it's only possible
* when not cloning with other encoder types.
*/
if (crtc_state->output_types != 1 << INTEL_OUTPUT_HDMI)
@ -1602,16 +1623,24 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
if (crtc_state->ycbcr420) {
const struct drm_hdmi_info *hdmi = &info->hdmi;
if (!(hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_36))
if (bpc == 12 && !(hdmi->y420_dc_modes &
DRM_EDID_YCBCR420_DC_36))
return false;
else if (bpc == 10 && !(hdmi->y420_dc_modes &
DRM_EDID_YCBCR420_DC_30))
return false;
} else {
if (!(info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36))
if (bpc == 12 && !(info->edid_hdmi_dc_modes &
DRM_EDID_HDMI_DC_36))
return false;
else if (bpc == 10 && !(info->edid_hdmi_dc_modes &
DRM_EDID_HDMI_DC_30))
return false;
}
}
/* Display WA #1139: glk */
if (IS_GLK_REVID(dev_priv, 0, GLK_REVID_A1) &&
if (bpc == 12 && IS_GLK_REVID(dev_priv, 0, GLK_REVID_A1) &&
crtc_state->base.adjusted_mode.htotal > 5460)
return false;
@ -1621,7 +1650,8 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
static bool
intel_hdmi_ycbcr420_config(struct drm_connector *connector,
struct intel_crtc_state *config,
int *clock_12bpc, int *clock_8bpc)
int *clock_12bpc, int *clock_10bpc,
int *clock_8bpc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(config->base.crtc);
@ -1633,6 +1663,7 @@ intel_hdmi_ycbcr420_config(struct drm_connector *connector,
/* YCBCR420 TMDS rate requirement is half the pixel clock */
config->port_clock /= 2;
*clock_12bpc /= 2;
*clock_10bpc /= 2;
*clock_8bpc /= 2;
config->ycbcr420 = true;
@ -1660,10 +1691,14 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
struct intel_digital_connector_state *intel_conn_state =
to_intel_digital_connector_state(conn_state);
int clock_8bpc = pipe_config->base.adjusted_mode.crtc_clock;
int clock_10bpc = clock_8bpc * 5 / 4;
int clock_12bpc = clock_8bpc * 3 / 2;
int desired_bpp;
bool force_dvi = intel_conn_state->force_audio == HDMI_AUDIO_OFF_DVI;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
pipe_config->has_hdmi_sink = !force_dvi && intel_hdmi->has_hdmi_sink;
if (pipe_config->has_hdmi_sink)
@ -1683,12 +1718,14 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK) {
pipe_config->pixel_multiplier = 2;
clock_8bpc *= 2;
clock_10bpc *= 2;
clock_12bpc *= 2;
}
if (drm_mode_is_420_only(&connector->display_info, adjusted_mode)) {
if (!intel_hdmi_ycbcr420_config(connector, pipe_config,
&clock_12bpc, &clock_8bpc)) {
&clock_12bpc, &clock_10bpc,
&clock_8bpc)) {
DRM_ERROR("Can't support YCBCR420 output\n");
return false;
}
@ -1706,18 +1743,25 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
}
/*
* HDMI is either 12 or 8, so if the display lets 10bpc sneak
* through, clamp it down. Note that g4x/vlv don't support 12bpc hdmi
* outputs. We also need to check that the higher clock still fits
* within limits.
* Note that g4x/vlv don't support 12bpc hdmi outputs. We also need
* to check that the higher clock still fits within limits.
*/
if (hdmi_12bpc_possible(pipe_config) &&
hdmi_port_clock_valid(intel_hdmi, clock_12bpc, true, force_dvi) == MODE_OK) {
if (hdmi_deep_color_possible(pipe_config, 12) &&
hdmi_port_clock_valid(intel_hdmi, clock_12bpc,
true, force_dvi) == MODE_OK) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;
/* Need to adjust the port link by 1.5x for 12bpc. */
pipe_config->port_clock = clock_12bpc;
} else if (hdmi_deep_color_possible(pipe_config, 10) &&
hdmi_port_clock_valid(intel_hdmi, clock_10bpc,
true, force_dvi) == MODE_OK) {
DRM_DEBUG_KMS("picking bpc to 10 for HDMI output\n");
desired_bpp = 10 * 3;
/* Need to adjust the port link by 1.25x for 10bpc. */
pipe_config->port_clock = clock_10bpc;
} else {
DRM_DEBUG_KMS("picking bpc to 8 for HDMI output\n");
desired_bpp = 8*3;
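
A worked example of the port clock relationship the branches above rely on; the 148.5 MHz figure is just a familiar 1080p60 pixel clock picked for illustration.

/* Illustration of the 8/10/12 bpc port clock scaling. */
#include <stdio.h>

int main(void)
{
	int clock_8bpc = 148500;                /* kHz */
	int clock_10bpc = clock_8bpc * 5 / 4;   /* 1.25x link rate for 10 bpc */
	int clock_12bpc = clock_8bpc * 3 / 2;   /* 1.5x  link rate for 12 bpc */

	printf(" 8 bpc: %d kHz\n", clock_8bpc);
	printf("10 bpc: %d kHz\n", clock_10bpc);        /* 185625 */
	printf("12 bpc: %d kHz\n", clock_12bpc);        /* 222750 */
	return 0;
}
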
@ -2239,7 +2283,7 @@ static u8 intel_hdmi_ddc_pin(struct drm_i915_private *dev_priv,
ddc_pin = bxt_port_to_ddc_pin(dev_priv, port);
else if (HAS_PCH_CNP(dev_priv))
ddc_pin = cnp_port_to_ddc_pin(dev_priv, port);
else if (IS_ICELAKE(dev_priv))
else if (HAS_PCH_ICP(dev_priv))
ddc_pin = icl_port_to_ddc_pin(dev_priv, port);
else
ddc_pin = g4x_port_to_ddc_pin(dev_priv, port);
@ -2334,7 +2378,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
* 0xd. Failure to do so will result in spurious interrupts being
* generated on the port when a cable is not attached.
*/
if (IS_G4X(dev_priv) && !IS_GM45(dev_priv)) {
if (IS_G45(dev_priv)) {
u32 temp = I915_READ(PEG_BAND_GAP_DATA);
I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
}


@ -77,12 +77,12 @@ static const struct gmbus_pin gmbus_pins_cnp[] = {
};
static const struct gmbus_pin gmbus_pins_icp[] = {
[GMBUS_PIN_1_BXT] = { "dpa", GPIOA },
[GMBUS_PIN_2_BXT] = { "dpb", GPIOB },
[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOC },
[GMBUS_PIN_10_TC2_ICP] = { "tc2", GPIOD },
[GMBUS_PIN_11_TC3_ICP] = { "tc3", GPIOE },
[GMBUS_PIN_12_TC4_ICP] = { "tc4", GPIOF },
[GMBUS_PIN_1_BXT] = { "dpa", GPIOB },
[GMBUS_PIN_2_BXT] = { "dpb", GPIOC },
[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOJ },
[GMBUS_PIN_10_TC2_ICP] = { "tc2", GPIOK },
[GMBUS_PIN_11_TC3_ICP] = { "tc3", GPIOL },
[GMBUS_PIN_12_TC4_ICP] = { "tc4", GPIOM },
};
/* pin is expected to be valid */
@ -771,7 +771,7 @@ int intel_setup_gmbus(struct drm_i915_private *dev_priv)
unsigned int pin;
int ret;
if (HAS_PCH_NOP(dev_priv))
if (INTEL_INFO(dev_priv)->num_pipes == 0)
return 0;
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))


@ -970,22 +970,19 @@ static void process_csb(struct intel_engine_cs *engine)
&engine->status_page.page_addr[I915_HWS_CSB_BUF0_INDEX];
unsigned int head, tail;
if (unlikely(execlists->csb_use_mmio)) {
buf = (u32 * __force)
(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
execlists->csb_head = -1; /* force mmio read of CSB */
}
/* Clear before reading to catch new interrupts */
clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
smp_mb__after_atomic();
if (unlikely(execlists->csb_head == -1)) { /* after a reset */
if (unlikely(execlists->csb_use_mmio)) {
if (!fw) {
intel_uncore_forcewake_get(i915, execlists->fw_domains);
fw = true;
}
buf = (u32 * __force)
(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
head = readl(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
tail = GEN8_CSB_WRITE_PTR(head);
head = GEN8_CSB_READ_PTR(head);
@ -1413,6 +1410,7 @@ __execlists_context_pin(struct intel_engine_cs *engine,
ce->lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
i915_ggtt_offset(ce->ring->vma);
GEM_BUG_ON(!intel_ring_offset_valid(ce->ring, ce->ring->head));
ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
ce->state->obj->pin_global++;
@ -1567,19 +1565,56 @@ static u32 *gen8_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
return batch;
}
struct lri {
i915_reg_t reg;
u32 value;
};
static u32 *emit_lri(u32 *batch, const struct lri *lri, unsigned int count)
{
GEM_BUG_ON(!count || count > 63);
*batch++ = MI_LOAD_REGISTER_IMM(count);
do {
*batch++ = i915_mmio_reg_offset(lri->reg);
*batch++ = lri->value;
} while (lri++, --count);
*batch++ = MI_NOOP;
return batch;
}
static u32 *gen9_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
{
static const struct lri lri[] = {
/* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */
{
COMMON_SLICE_CHICKEN2,
__MASKED_FIELD(GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE,
0),
},
/* BSpec: 11391 */
{
FF_SLICE_CHICKEN,
__MASKED_FIELD(FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX,
FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX),
},
/* BSpec: 11299 */
{
_3D_CHICKEN3,
__MASKED_FIELD(_3D_CHICKEN_SF_PROVOKING_VERTEX_FIX,
_3D_CHICKEN_SF_PROVOKING_VERTEX_FIX),
}
};
*batch++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
/* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt,glk */
batch = gen8_emit_flush_coherentl3_wa(engine, batch);
/* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */
*batch++ = MI_LOAD_REGISTER_IMM(1);
*batch++ = i915_mmio_reg_offset(COMMON_SLICE_CHICKEN2);
*batch++ = _MASKED_BIT_DISABLE(
GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE);
*batch++ = MI_NOOP;
batch = emit_lri(batch, lri, ARRAY_SIZE(lri));
/* WaClearSlmSpaceAtContextSwitch:kbl */
/* Actual scratch location is at 128 bytes offset */
@ -1809,7 +1844,6 @@ static bool unexpected_starting_state(struct intel_engine_cs *engine)
static int gen8_init_common_ring(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
int ret;
ret = intel_mocs_init_engine(engine);
@ -1827,10 +1861,6 @@ static int gen8_init_common_ring(struct intel_engine_cs *engine)
enable_execlists(engine);
/* After a GPU reset, we may have requests to replay */
if (execlists->first)
tasklet_schedule(&execlists->tasklet);
return 0;
}
@ -1964,7 +1994,7 @@ static void execlists_reset(struct intel_engine_cs *engine,
spin_unlock(&engine->timeline.lock);
/* Following the reset, we need to reload the CSB read/write pointers */
engine->execlists.csb_head = -1;
engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
local_irq_restore(flags);
@ -2001,9 +2031,10 @@ static void execlists_reset(struct intel_engine_cs *engine,
/* Move the RING_HEAD onto the breadcrumb, past the hanging batch */
regs[CTX_RING_BUFFER_START + 1] = i915_ggtt_offset(request->ring->vma);
regs[CTX_RING_HEAD + 1] = request->postfix;
request->ring->head = request->postfix;
request->ring->head = intel_ring_wrap(request->ring, request->postfix);
regs[CTX_RING_HEAD + 1] = request->ring->head;
intel_ring_update_space(request->ring);
/* Reset WaIdleLiteRestore:bdw,skl as well */
@ -2012,6 +2043,12 @@ static void execlists_reset(struct intel_engine_cs *engine,
static void execlists_reset_finish(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
/* After a GPU reset, we may have requests to replay */
if (execlists->first)
tasklet_schedule(&execlists->tasklet);
/*
* Flush the tasklet while we still have the forcewake to be sure
* that it is not allowed to sleep before we restart and reload a
@ -2021,7 +2058,7 @@ static void execlists_reset_finish(struct intel_engine_cs *engine)
* serialising multiple attempts to reset so that we know that we
* are the only one manipulating tasklet state.
*/
__tasklet_enable_sync_once(&engine->execlists.tasklet);
__tasklet_enable_sync_once(&execlists->tasklet);
GEM_TRACE("%s\n", engine->name);
}
@ -2465,7 +2502,7 @@ static int logical_ring_init(struct intel_engine_cs *engine)
upper_32_bits(ce->lrc_desc);
}
engine->execlists.csb_head = -1;
engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
return 0;
@ -2768,10 +2805,8 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
context_size += LRC_HEADER_PAGES * PAGE_SIZE;
ctx_obj = i915_gem_object_create(ctx->i915, context_size);
if (IS_ERR(ctx_obj)) {
ret = PTR_ERR(ctx_obj);
goto error_deref_obj;
}
if (IS_ERR(ctx_obj))
return PTR_ERR(ctx_obj);
vma = i915_vma_instance(ctx_obj, &ctx->i915->ggtt.vm, NULL);
if (IS_ERR(vma)) {


@ -116,7 +116,7 @@ static int lspcon_change_mode(struct intel_lspcon *lspcon,
static bool lspcon_wake_native_aux_ch(struct intel_lspcon *lspcon)
{
uint8_t rev;
u8 rev;
if (drm_dp_dpcd_readb(&lspcon_to_intel_dp(lspcon)->aux, DP_DPCD_REV,
&rev) != 1) {


@ -378,6 +378,8 @@ intel_lvds_mode_valid(struct drm_connector *connector,
struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
int max_pixclk = to_i915(connector->dev)->max_dotclk_freq;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (mode->hdisplay > fixed_mode->hdisplay)
return MODE_PANEL;
if (mode->vdisplay > fixed_mode->vdisplay)
@ -427,6 +429,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
adjusted_mode);
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
if (HAS_PCH_SPLIT(dev_priv)) {
pipe_config->has_pch_encoder = true;


@ -608,16 +608,16 @@ void intel_opregion_asle_intr(struct drm_i915_private *dev_priv)
#define ACPI_EV_LID (1<<1)
#define ACPI_EV_DOCK (1<<2)
static struct intel_opregion *system_opregion;
/*
* The only video events relevant to opregion are 0x80. These indicate either a
* docking event, lid switch or display switch request. In Linux, these are
* handled by the dock, button and video drivers.
*/
static int intel_opregion_video_event(struct notifier_block *nb,
unsigned long val, void *data)
{
/* The only video events relevant to opregion are 0x80. These indicate
either a docking event, lid switch or display switch request. In
Linux, these are handled by the dock, button and video drivers.
*/
struct intel_opregion *opregion = container_of(nb, struct intel_opregion,
acpi_notifier);
struct acpi_bus_event *event = data;
struct opregion_acpi *acpi;
int ret = NOTIFY_OK;
@ -625,10 +625,7 @@ static int intel_opregion_video_event(struct notifier_block *nb,
if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0)
return NOTIFY_DONE;
if (!system_opregion)
return NOTIFY_DONE;
acpi = system_opregion->acpi;
acpi = opregion->acpi;
if (event->type == 0x80 && ((acpi->cevt & 1) == 0))
ret = NOTIFY_BAD;
@ -638,10 +635,6 @@ static int intel_opregion_video_event(struct notifier_block *nb,
return ret;
}
static struct notifier_block intel_opregion_notifier = {
.notifier_call = intel_opregion_video_event,
};
/*
* Initialise the DIDL field in opregion. This passes a list of devices to
* the firmware. Values are defined by section B.4.2 of the ACPI specification
@ -797,8 +790,8 @@ void intel_opregion_register(struct drm_i915_private *dev_priv)
opregion->acpi->csts = 0;
opregion->acpi->drdy = 1;
system_opregion = opregion;
register_acpi_notifier(&intel_opregion_notifier);
opregion->acpi_notifier.notifier_call = intel_opregion_video_event;
register_acpi_notifier(&opregion->acpi_notifier);
}
if (opregion->asle) {
@ -822,8 +815,8 @@ void intel_opregion_unregister(struct drm_i915_private *dev_priv)
if (opregion->acpi) {
opregion->acpi->drdy = 0;
system_opregion = NULL;
unregister_acpi_notifier(&intel_opregion_notifier);
unregister_acpi_notifier(&opregion->acpi_notifier);
opregion->acpi_notifier.notifier_call = NULL;
}
/* just clear all opregion memory pointers now */


@ -49,6 +49,7 @@ struct intel_opregion {
u32 vbt_size;
u32 *lid_state;
struct work_struct asle_work;
struct notifier_block acpi_notifier;
};
#define OPREGION_SIZE (8 * 1024)


@ -406,11 +406,11 @@ intel_panel_detect(struct drm_i915_private *dev_priv)
* Return @source_val in range [@source_min..@source_max] scaled to range
* [@target_min..@target_max].
*/
static uint32_t scale(uint32_t source_val,
uint32_t source_min, uint32_t source_max,
uint32_t target_min, uint32_t target_max)
static u32 scale(u32 source_val,
u32 source_min, u32 source_max,
u32 target_min, u32 target_max)
{
uint64_t target_val;
u64 target_val;
WARN_ON(source_min > source_max);
WARN_ON(target_min > target_max);
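
The body of scale() is not visible in this hunk; the sketch below shows the rounded linear range mapping such a helper is expected to perform (an assumption for illustration, not a copy of the driver implementation).

/* Assumed behaviour of a scale()-style helper: rounded linear range mapping. */
#include <stdio.h>
#include <stdint.h>

static uint32_t scale(uint32_t source_val,
		      uint32_t source_min, uint32_t source_max,
		      uint32_t target_min, uint32_t target_max)
{
	uint64_t target_val;

	if (source_val < source_min)
		source_val = source_min;
	if (source_val > source_max)
		source_val = source_max;
	if (source_max == source_min)
		return target_min;

	/* 64-bit intermediate to avoid overflow, rounded to nearest */
	target_val = (uint64_t)(source_val - source_min) * (target_max - target_min);
	target_val = (target_val + (source_max - source_min) / 2) /
		     (source_max - source_min);

	return target_min + (uint32_t)target_val;
}

int main(void)
{
	/* e.g. map a 0..255 brightness request onto a 0..1000 PWM range */
	printf("%u\n", scale(128, 0, 255, 0, 1000));    /* 502 */
	return 0;
}
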


@ -671,21 +671,7 @@ void intel_psr_enable(struct intel_dp *intel_dp,
dev_priv->psr.enable_source(intel_dp, crtc_state);
dev_priv->psr.enabled = intel_dp;
if (INTEL_GEN(dev_priv) >= 9) {
intel_psr_activate(intel_dp);
} else {
/*
* FIXME: Activation should happen immediately since this
* function is just called after pipe is fully trained and
* enabled.
* However on some platforms we face issues when first
* activation follows a modeset so quickly.
* - On HSW/BDW we get a recoverable frozen screen until
* next exit-activate sequence.
*/
schedule_delayed_work(&dev_priv->psr.work,
msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5));
}
intel_psr_activate(intel_dp);
unlock:
mutex_unlock(&dev_priv->psr.lock);
@ -768,8 +754,7 @@ void intel_psr_disable(struct intel_dp *intel_dp,
dev_priv->psr.enabled = NULL;
mutex_unlock(&dev_priv->psr.lock);
cancel_delayed_work_sync(&dev_priv->psr.work);
cancel_work_sync(&dev_priv->psr.work);
}
static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
@ -805,10 +790,13 @@ static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
static void intel_psr_work(struct work_struct *work)
{
struct drm_i915_private *dev_priv =
container_of(work, typeof(*dev_priv), psr.work.work);
container_of(work, typeof(*dev_priv), psr.work);
mutex_lock(&dev_priv->psr.lock);
if (!dev_priv->psr.enabled)
goto unlock;
/*
* We have to make sure PSR is ready for re-enable
* otherwise it stays disabled until the next full enable/disable cycle.
@ -949,9 +937,7 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
}
if (!dev_priv->psr.active && !dev_priv->psr.busy_frontbuffer_bits)
if (!work_busy(&dev_priv->psr.work.work))
schedule_delayed_work(&dev_priv->psr.work,
msecs_to_jiffies(100));
schedule_work(&dev_priv->psr.work);
mutex_unlock(&dev_priv->psr.lock);
}
@ -998,7 +984,7 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
dev_priv->psr.link_standby = false;
}
INIT_DELAYED_WORK(&dev_priv->psr.work, intel_psr_work);
INIT_WORK(&dev_priv->psr.work, intel_psr_work);
mutex_init(&dev_priv->psr.lock);
dev_priv->psr.enable_source = hsw_psr_enable_source;


@ -496,6 +496,10 @@ static int init_ring_common(struct intel_engine_cs *engine)
DRM_DEBUG_DRIVER("%s initialization failed [head=%08x], fudging\n",
engine->name, I915_READ_HEAD(engine));
/* Check that the ring offsets point within the ring! */
GEM_BUG_ON(!intel_ring_offset_valid(ring, ring->head));
GEM_BUG_ON(!intel_ring_offset_valid(ring, ring->tail));
intel_ring_update_space(ring);
I915_WRITE_HEAD(engine, ring->head);
I915_WRITE_TAIL(engine, ring->tail);
@ -541,19 +545,23 @@ static struct i915_request *reset_prepare(struct intel_engine_cs *engine)
return i915_gem_find_active_request(engine);
}
static void reset_ring(struct intel_engine_cs *engine,
struct i915_request *request)
static void skip_request(struct i915_request *rq)
{
GEM_TRACE("%s seqno=%x\n",
engine->name, request ? request->global_seqno : 0);
void *vaddr = rq->ring->vaddr;
u32 head;
/*
* RC6 must be prevented until the reset is complete and the engine
* reinitialised. If it occurs in the middle of this sequence, the
* state written to/loaded from the power context is ill-defined (e.g.
* the PP_BASE_DIR may be lost).
*/
assert_forcewakes_active(engine->i915, FORCEWAKE_ALL);
head = rq->infix;
if (rq->postfix < head) {
memset32(vaddr + head, MI_NOOP,
(rq->ring->size - head) / sizeof(u32));
head = 0;
}
memset32(vaddr + head, MI_NOOP, (rq->postfix - head) / sizeof(u32));
}
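
To make the wrap handling above easier to follow, here is a toy model of the same NOOP-fill pattern on a tiny ring; the buffer size and offsets are invented for the example, and since MI_NOOP is zero a plain byte memset stands in for the kernel's memset32.

/* Toy model of NOOPing out a request that wraps the ring (illustration only). */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 16u  /* in u32 slots; a power of two like the real ring */
#define MI_NOOP    0u

static void skip_range(uint32_t *ring, unsigned int head, unsigned int postfix)
{
	if (postfix < head) {   /* wrapped: clear from head to the end of the ring... */
		memset(ring + head, MI_NOOP, (RING_SLOTS - head) * sizeof(*ring));
		head = 0;       /* ...then continue from the start */
	}
	memset(ring + head, MI_NOOP, (postfix - head) * sizeof(*ring));
}

int main(void)
{
	uint32_t ring[RING_SLOTS];
	unsigned int i;

	for (i = 0; i < RING_SLOTS; i++)
		ring[i] = 0xdeadbeef;   /* pretend the hung batch lives here */

	skip_range(ring, 12, 4);        /* request spans slots 12..15 and 0..3 */

	for (i = 0; i < RING_SLOTS; i++)
		printf("%2u: 0x%08x\n", i, ring[i]);
	return 0;
}
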
static void reset_ring(struct intel_engine_cs *engine, struct i915_request *rq)
{
GEM_TRACE("%s seqno=%x\n", engine->name, rq ? rq->global_seqno : 0);
/*
* Try to restore the logical GPU state to match the continuation
@ -569,43 +577,11 @@ static void reset_ring(struct intel_engine_cs *engine,
* If the request was innocent, we try to replay the request with
* the restored context.
*/
if (request) {
struct drm_i915_private *dev_priv = request->i915;
struct intel_context *ce = request->hw_context;
struct i915_hw_ppgtt *ppgtt;
if (ce->state) {
I915_WRITE(CCID,
i915_ggtt_offset(ce->state) |
BIT(8) /* must be set! */ |
CCID_EXTENDED_STATE_SAVE |
CCID_EXTENDED_STATE_RESTORE |
CCID_EN);
}
ppgtt = request->gem_context->ppgtt ?: engine->i915->mm.aliasing_ppgtt;
if (ppgtt) {
u32 pd_offset = ppgtt->pd.base.ggtt_offset << 10;
I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
I915_WRITE(RING_PP_DIR_BASE(engine), pd_offset);
/* Wait for the PD reload to complete */
if (intel_wait_for_register(dev_priv,
RING_PP_DIR_BASE(engine),
BIT(0), 0,
10))
DRM_ERROR("Wait for reload of ppgtt page-directory timed out\n");
ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
}
if (rq) {
/* If the rq hung, jump to its breadcrumb and skip the batch */
if (request->fence.error == -EIO)
request->ring->head = request->postfix;
} else {
engine->legacy_active_context = NULL;
engine->legacy_active_ppgtt = NULL;
rq->ring->head = intel_ring_wrap(rq->ring, rq->head);
if (rq->fence.error == -EIO)
skip_request(rq);
}
}
@ -1084,6 +1060,8 @@ int intel_ring_pin(struct intel_ring *ring,
void intel_ring_reset(struct intel_ring *ring, u32 tail)
{
GEM_BUG_ON(!intel_ring_offset_valid(ring, tail));
ring->tail = tail;
ring->head = tail;
ring->emit = tail;
@ -1195,6 +1173,27 @@ static void intel_ring_context_destroy(struct intel_context *ce)
__i915_gem_object_release_unless_active(ce->state->obj);
}
static int __context_pin_ppgtt(struct i915_gem_context *ctx)
{
struct i915_hw_ppgtt *ppgtt;
int err = 0;
ppgtt = ctx->ppgtt ?: ctx->i915->mm.aliasing_ppgtt;
if (ppgtt)
err = gen6_ppgtt_pin(ppgtt);
return err;
}
static void __context_unpin_ppgtt(struct i915_gem_context *ctx)
{
struct i915_hw_ppgtt *ppgtt;
ppgtt = ctx->ppgtt ?: ctx->i915->mm.aliasing_ppgtt;
if (ppgtt)
gen6_ppgtt_unpin(ppgtt);
}
static int __context_pin(struct intel_context *ce)
{
struct i915_vma *vma;
@ -1243,6 +1242,7 @@ static void __context_unpin(struct intel_context *ce)
static void intel_ring_context_unpin(struct intel_context *ce)
{
__context_unpin_ppgtt(ce->gem_context);
__context_unpin(ce);
i915_gem_context_put(ce->gem_context);
@ -1340,6 +1340,10 @@ __ring_context_pin(struct intel_engine_cs *engine,
if (err)
goto err;
err = __context_pin_ppgtt(ce->gem_context);
if (err)
goto err_unpin;
i915_gem_context_get(ctx);
/* One ringbuffer to rule them all */
@ -1348,6 +1352,8 @@ __ring_context_pin(struct intel_engine_cs *engine,
return ce;
err_unpin:
__context_unpin(ce);
err:
ce->pin_count = 0;
return ERR_PTR(err);
@ -1377,8 +1383,9 @@ intel_ring_context_pin(struct intel_engine_cs *engine,
static int intel_init_ring_buffer(struct intel_engine_cs *engine)
{
struct intel_ring *ring;
struct i915_timeline *timeline;
struct intel_ring *ring;
unsigned int size;
int err;
intel_engine_setup_common(engine);
@ -1404,12 +1411,21 @@ static int intel_init_ring_buffer(struct intel_engine_cs *engine)
GEM_BUG_ON(engine->buffer);
engine->buffer = ring;
err = intel_engine_init_common(engine);
size = PAGE_SIZE;
if (HAS_BROKEN_CS_TLB(engine->i915))
size = I830_WA_SIZE;
err = intel_engine_create_scratch(engine, size);
if (err)
goto err_unpin;
err = intel_engine_init_common(engine);
if (err)
goto err_scratch;
return 0;
err_scratch:
intel_engine_cleanup_scratch(engine);
err_unpin:
intel_ring_unpin(ring);
err_ring:
@ -1448,6 +1464,48 @@ void intel_legacy_submission_resume(struct drm_i915_private *dev_priv)
intel_ring_reset(engine->buffer, 0);
}
static int load_pd_dir(struct i915_request *rq,
const struct i915_hw_ppgtt *ppgtt)
{
const struct intel_engine_cs * const engine = rq->engine;
u32 *cs;
cs = intel_ring_begin(rq, 6);
if (IS_ERR(cs))
return PTR_ERR(cs);
*cs++ = MI_LOAD_REGISTER_IMM(1);
*cs++ = i915_mmio_reg_offset(RING_PP_DIR_DCLV(engine));
*cs++ = PP_DIR_DCLV_2G;
*cs++ = MI_LOAD_REGISTER_IMM(1);
*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
*cs++ = ppgtt->pd.base.ggtt_offset << 10;
intel_ring_advance(rq, cs);
return 0;
}
static int flush_pd_dir(struct i915_request *rq)
{
const struct intel_engine_cs * const engine = rq->engine;
u32 *cs;
cs = intel_ring_begin(rq, 4);
if (IS_ERR(cs))
return PTR_ERR(cs);
/* Stall until the page table load is complete */
*cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
*cs++ = i915_ggtt_offset(engine->scratch);
*cs++ = MI_NOOP;
intel_ring_advance(rq, cs);
return 0;
}
static inline int mi_set_context(struct i915_request *rq, u32 flags)
{
struct drm_i915_private *i915 = rq->i915;
@ -1458,6 +1516,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
(HAS_LEGACY_SEMAPHORES(i915) && IS_GEN7(i915)) ?
INTEL_INFO(i915)->num_rings - 1 :
0;
bool force_restore = false;
int len;
u32 *cs;
@ -1471,6 +1530,12 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
len = 4;
if (IS_GEN7(i915))
len += 2 + (num_rings ? 4*num_rings + 6 : 0);
if (flags & MI_FORCE_RESTORE) {
GEM_BUG_ON(flags & MI_RESTORE_INHIBIT);
flags &= ~MI_FORCE_RESTORE;
force_restore = true;
len += 2;
}
cs = intel_ring_begin(rq, len);
if (IS_ERR(cs))
@ -1495,6 +1560,26 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
}
}
if (force_restore) {
/*
* The HW doesn't handle being told to restore the current
* context very well. Quite often it likes to go off and
* sulk, especially when it is meant to be reloading PP_DIR.
* A very simple fix to force the reload is to simply switch
* away from the current context and back again.
*
* Note that the kernel_context will contain random state
* following the INHIBIT_RESTORE. We accept this since we
* never use the kernel_context state; it is merely a
* placeholder we use to flush other contexts.
*/
*cs++ = MI_SET_CONTEXT;
*cs++ = i915_ggtt_offset(to_intel_context(i915->kernel_context,
engine)->state) |
MI_MM_SPACE_GTT |
MI_RESTORE_INHIBIT;
}
*cs++ = MI_NOOP;
*cs++ = MI_SET_CONTEXT;
*cs++ = i915_ggtt_offset(rq->hw_context->state) | flags;
@ -1565,31 +1650,28 @@ static int remap_l3(struct i915_request *rq, int slice)
static int switch_context(struct i915_request *rq)
{
struct intel_engine_cs *engine = rq->engine;
struct i915_gem_context *to_ctx = rq->gem_context;
struct i915_hw_ppgtt *to_mm =
to_ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
struct i915_gem_context *from_ctx = engine->legacy_active_context;
struct i915_hw_ppgtt *from_mm = engine->legacy_active_ppgtt;
struct i915_gem_context *ctx = rq->gem_context;
struct i915_hw_ppgtt *ppgtt = ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
unsigned int unwind_mm = 0;
u32 hw_flags = 0;
int ret, i;
lockdep_assert_held(&rq->i915->drm.struct_mutex);
GEM_BUG_ON(HAS_EXECLISTS(rq->i915));
if (to_mm != from_mm ||
(to_mm && intel_engine_flag(engine) & to_mm->pd_dirty_rings)) {
trace_switch_mm(engine, to_ctx);
ret = to_mm->switch_mm(to_mm, rq);
if (ppgtt) {
ret = load_pd_dir(rq, ppgtt);
if (ret)
goto err;
to_mm->pd_dirty_rings &= ~intel_engine_flag(engine);
engine->legacy_active_ppgtt = to_mm;
hw_flags = MI_FORCE_RESTORE;
if (intel_engine_flag(engine) & ppgtt->pd_dirty_rings) {
unwind_mm = intel_engine_flag(engine);
ppgtt->pd_dirty_rings &= ~unwind_mm;
hw_flags = MI_FORCE_RESTORE;
}
}
if (rq->hw_context->state &&
(to_ctx != from_ctx || hw_flags & MI_FORCE_RESTORE)) {
if (rq->hw_context->state) {
GEM_BUG_ON(engine->id != RCS);
/*
@ -1599,35 +1681,38 @@ static int switch_context(struct i915_request *rq)
* as nothing actually executes using the kernel context; it
* is purely used for flushing user contexts.
*/
if (i915_gem_context_is_kernel(to_ctx))
if (i915_gem_context_is_kernel(ctx))
hw_flags = MI_RESTORE_INHIBIT;
ret = mi_set_context(rq, hw_flags);
if (ret)
goto err_mm;
engine->legacy_active_context = to_ctx;
}
if (to_ctx->remap_slice) {
if (ppgtt) {
ret = flush_pd_dir(rq);
if (ret)
goto err_mm;
}
if (ctx->remap_slice) {
for (i = 0; i < MAX_L3_SLICES; i++) {
if (!(to_ctx->remap_slice & BIT(i)))
if (!(ctx->remap_slice & BIT(i)))
continue;
ret = remap_l3(rq, i);
if (ret)
goto err_ctx;
goto err_mm;
}
to_ctx->remap_slice = 0;
ctx->remap_slice = 0;
}
return 0;
err_ctx:
engine->legacy_active_context = from_ctx;
err_mm:
engine->legacy_active_ppgtt = from_mm;
if (unwind_mm)
ppgtt->pd_dirty_rings |= unwind_mm;
err:
return ret;
}
@ -2130,16 +2215,6 @@ int intel_init_render_ring_buffer(struct intel_engine_cs *engine)
if (ret)
return ret;
if (INTEL_GEN(dev_priv) >= 6) {
ret = intel_engine_create_scratch(engine, PAGE_SIZE);
if (ret)
return ret;
} else if (HAS_BROKEN_CS_TLB(dev_priv)) {
ret = intel_engine_create_scratch(engine, I830_WA_SIZE);
if (ret)
return ret;
}
return 0;
}


@ -122,7 +122,8 @@ struct intel_engine_hangcheck {
int deadlock;
struct intel_instdone instdone;
struct i915_request *active_request;
bool stalled;
bool stalled:1;
bool wedged:1;
};
struct intel_ring {
@ -557,15 +558,6 @@ struct intel_engine_cs {
*/
struct intel_context *last_retired_context;
/* We track the current MI_SET_CONTEXT in order to eliminate
* redundant context switches. This presumes that requests are not
* reordered! Or when they are the tracking is updated along with
* the emission of individual requests into the legacy command
* stream (ring).
*/
struct i915_gem_context *legacy_active_context;
struct i915_hw_ppgtt *legacy_active_ppgtt;
/* status_notifier: list of callbacks for context-switch changes */
struct atomic_notifier_head context_status_notifier;
@ -814,6 +806,19 @@ static inline u32 intel_ring_wrap(const struct intel_ring *ring, u32 pos)
return pos & (ring->size - 1);
}
static inline bool
intel_ring_offset_valid(const struct intel_ring *ring,
unsigned int pos)
{
if (pos & -ring->size) /* must be strictly within the ring */
return false;
if (!IS_ALIGNED(pos, 8)) /* must be qword aligned */
return false;
return true;
}
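
The "pos & -ring->size" test above relies on the ring size being a power of two: negating it sets every bit at or above log2(size), so any offset outside [0, size) leaves one of those bits set. A standalone illustration with an arbitrary 4 KiB ring:

/* Demonstrates the power-of-two range check used by the offset helper. */
#include <stdbool.h>
#include <stdio.h>

static bool offset_valid(unsigned int size, unsigned int pos)
{
	if (pos & -size)        /* must lie strictly within [0, size) */
		return false;
	if (pos & 7)            /* must be qword (8-byte) aligned */
		return false;
	return true;
}

int main(void)
{
	unsigned int size = 4096;       /* power-of-two ring size */

	printf("%d\n", offset_valid(size, 0));          /* 1: start of ring */
	printf("%d\n", offset_valid(size, 4088));       /* 1: last aligned slot */
	printf("%d\n", offset_valid(size, 4095));       /* 0: not qword aligned */
	printf("%d\n", offset_valid(size, 4096));       /* 0: one past the end */
	return 0;
}
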
static inline u32 intel_ring_offset(const struct i915_request *rq, void *addr)
{
/* Don't write ring->size (equivalent to 0) as that hangs some GPUs. */
@ -825,12 +830,7 @@ static inline u32 intel_ring_offset(const struct i915_request *rq, void *addr)
static inline void
assert_ring_tail_valid(const struct intel_ring *ring, unsigned int tail)
{
/* We could combine these into a single tail operation, but keeping
* them as separate tests will help identify the cause should one
* ever fire.
*/
GEM_BUG_ON(!IS_ALIGNED(tail, 8));
GEM_BUG_ON(tail >= ring->size);
GEM_BUG_ON(!intel_ring_offset_valid(ring, tail));
/*
* "Ring Buffer Use"
@ -870,9 +870,12 @@ void intel_engine_init_global_seqno(struct intel_engine_cs *engine, u32 seqno);
void intel_engine_setup_common(struct intel_engine_cs *engine);
int intel_engine_init_common(struct intel_engine_cs *engine);
int intel_engine_create_scratch(struct intel_engine_cs *engine, int size);
void intel_engine_cleanup_common(struct intel_engine_cs *engine);
int intel_engine_create_scratch(struct intel_engine_cs *engine,
unsigned int size);
void intel_engine_cleanup_scratch(struct intel_engine_cs *engine);
int intel_init_render_ring_buffer(struct intel_engine_cs *engine);
int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
int intel_init_blt_ring_buffer(struct intel_engine_cs *engine);
@ -1049,6 +1052,8 @@ gen8_emit_ggtt_write(u32 *cs, u32 value, u32 gtt_offset)
return cs;
}
void intel_engines_sanitize(struct drm_i915_private *i915);
bool intel_engine_is_idle(struct intel_engine_cs *engine);
bool intel_engines_are_idle(struct drm_i915_private *dev_priv);


@ -128,6 +128,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
return "AUX_C";
case POWER_DOMAIN_AUX_D:
return "AUX_D";
case POWER_DOMAIN_AUX_E:
return "AUX_E";
case POWER_DOMAIN_AUX_F:
return "AUX_F";
case POWER_DOMAIN_AUX_IO_A:


@ -1160,6 +1160,9 @@ static bool intel_sdvo_compute_config(struct intel_encoder *encoder,
adjusted_mode);
}
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
/*
* Make the CRTC code factor in the SDVO pixel multiplier. The
* SDVO device will factor out the multiplier during mode_set.
@ -1631,6 +1634,9 @@ intel_sdvo_mode_valid(struct drm_connector *connector,
struct intel_sdvo *intel_sdvo = intel_attached_sdvo(connector);
int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (intel_sdvo->pixel_clock_min > mode->clock)
return MODE_CLOCK_LOW;


@ -1101,6 +1101,37 @@ intel_check_sprite_plane(struct intel_plane *plane,
return 0;
}
static bool has_dst_key_in_primary_plane(struct drm_i915_private *dev_priv)
{
return INTEL_GEN(dev_priv) >= 9;
}
static void intel_plane_set_ckey(struct intel_plane_state *plane_state,
const struct drm_intel_sprite_colorkey *set)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
*key = *set;
/*
* We want src key enabled on the
* sprite and not on the primary.
*/
if (plane->id == PLANE_PRIMARY &&
set->flags & I915_SET_COLORKEY_SOURCE)
key->flags = 0;
/*
* On SKL+ we want dst key enabled on
* the primary and not on the sprite.
*/
if (INTEL_GEN(dev_priv) >= 9 && plane->id != PLANE_PRIMARY &&
set->flags & I915_SET_COLORKEY_DESTINATION)
key->flags = 0;
}
int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
@ -1130,6 +1161,16 @@ int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY)
return -ENOENT;
/*
* On SKL+ only plane 2 can do destination keying against plane 1.
* Also multiple planes can't do destination keying on the same
* pipe simultaneously.
*/
if (INTEL_GEN(dev_priv) >= 9 &&
to_intel_plane(plane)->id >= PLANE_SPRITE1 &&
set->flags & I915_SET_COLORKEY_DESTINATION)
return -EINVAL;
drm_modeset_acquire_init(&ctx, 0);
state = drm_atomic_state_alloc(plane->dev);
@ -1142,11 +1183,28 @@ int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
while (1) {
plane_state = drm_atomic_get_plane_state(state, plane);
ret = PTR_ERR_OR_ZERO(plane_state);
if (!ret) {
to_intel_plane_state(plane_state)->ckey = *set;
ret = drm_atomic_commit(state);
if (!ret)
intel_plane_set_ckey(to_intel_plane_state(plane_state), set);
/*
* On some platforms we have to configure
* the dst colorkey on the primary plane.
*/
if (!ret && has_dst_key_in_primary_plane(dev_priv)) {
struct intel_crtc *crtc =
intel_get_crtc_for_pipe(dev_priv,
to_intel_plane(plane)->pipe);
plane_state = drm_atomic_get_plane_state(state,
crtc->base.primary);
ret = PTR_ERR_OR_ZERO(plane_state);
if (!ret)
intel_plane_set_ckey(to_intel_plane_state(plane_state), set);
}
if (!ret)
ret = drm_atomic_commit(state);
if (ret != -EDEADLK)
break;


@ -846,6 +846,9 @@ intel_tv_mode_valid(struct drm_connector *connector,
const struct tv_mode *tv_mode = intel_tv_mode_find(connector->state);
int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
return MODE_NO_DBLESCAN;
if (mode->clock > max_dotclk)
return MODE_CLOCK_HIGH;
@ -873,16 +876,21 @@ intel_tv_compute_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state)
{
const struct tv_mode *tv_mode = intel_tv_mode_find(conn_state);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
if (!tv_mode)
return false;
pipe_config->base.adjusted_mode.crtc_clock = tv_mode->clock;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return false;
adjusted_mode->crtc_clock = tv_mode->clock;
DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
pipe_config->pipe_bpp = 8*3;
/* TV has it's own notion of sync and other mode flags, so clear them. */
pipe_config->base.adjusted_mode.flags = 0;
adjusted_mode->flags = 0;
/*
* FIXME: We don't check whether the input mode is actually what we want


@ -207,7 +207,7 @@ void intel_uc_init_mmio(struct drm_i915_private *i915)
static void guc_capture_load_err_log(struct intel_guc *guc)
{
if (!guc->log.vma || !i915_modparams.guc_log_level)
if (!guc->log.vma || !intel_guc_log_get_level(&guc->log))
return;
if (!guc->load_err_log)


@ -2093,21 +2093,25 @@ static int gen8_reset_engines(struct drm_i915_private *dev_priv,
{
struct intel_engine_cs *engine;
unsigned int tmp;
int ret;
for_each_engine_masked(engine, dev_priv, engine_mask, tmp)
if (gen8_reset_engine_start(engine))
for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
if (gen8_reset_engine_start(engine)) {
ret = -EIO;
goto not_ready;
}
}
if (INTEL_GEN(dev_priv) >= 11)
return gen11_reset_engines(dev_priv, engine_mask);
ret = gen11_reset_engines(dev_priv, engine_mask);
else
return gen6_reset_engines(dev_priv, engine_mask);
ret = gen6_reset_engines(dev_priv, engine_mask);
not_ready:
for_each_engine_masked(engine, dev_priv, engine_mask, tmp)
gen8_reset_engine_cancel(engine);
return -EIO;
return ret;
}
typedef int (*reset_func)(struct drm_i915_private *, unsigned engine_mask);
@ -2170,6 +2174,8 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
* Thus assume it is best to stop engines on all gens
* where we have a gpu reset.
*
* WaKBLVECSSemaphoreWaitPoll:kbl (on ALL_ENGINES)
*
* WaMediaResetMainRingCleanup:ctg,elk (presumably)
*
* FIXME: Wa for more modern gens needs to be validated


@ -67,21 +67,21 @@ struct intel_uncore_funcs {
void (*force_wake_put)(struct drm_i915_private *dev_priv,
enum forcewake_domains domains);
uint8_t (*mmio_readb)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
uint16_t (*mmio_readw)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
uint32_t (*mmio_readl)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
uint64_t (*mmio_readq)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
u8 (*mmio_readb)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
u16 (*mmio_readw)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
u32 (*mmio_readl)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
u64 (*mmio_readq)(struct drm_i915_private *dev_priv,
i915_reg_t r, bool trace);
void (*mmio_writeb)(struct drm_i915_private *dev_priv,
i915_reg_t r, uint8_t val, bool trace);
i915_reg_t r, u8 val, bool trace);
void (*mmio_writew)(struct drm_i915_private *dev_priv,
i915_reg_t r, uint16_t val, bool trace);
i915_reg_t r, u16 val, bool trace);
void (*mmio_writel)(struct drm_i915_private *dev_priv,
i915_reg_t r, uint32_t val, bool trace);
i915_reg_t r, u32 val, bool trace);
};
struct intel_forcewake_range {


@ -420,7 +420,9 @@ struct child_device_config {
u16 extended_type;
u8 dvo_function;
u8 dp_usb_type_c:1; /* 195 */
u8 flags2_reserved:7; /* 195 */
u8 tbt:1; /* 209 */
u8 flags2_reserved:2; /* 195 */
u8 dp_port_trace_length:4; /* 209 */
u8 dp_gpio_index; /* 195 */
u16 dp_gpio_pin_num; /* 195 */
u8 dp_iboost_level:4; /* 196 */
@ -454,7 +456,7 @@ struct bdb_general_definitions {
* number = (block_size - sizeof(bdb_general_definitions))/
* defs->child_dev_size;
*/
uint8_t devices[0];
u8 devices[0];
} __packed;
/* Mask for DRRS / Panel Channel / SSC / BLT control bits extraction */


@ -48,29 +48,58 @@
* - Public functions to init or apply the given workaround type.
*/
static int wa_add(struct drm_i915_private *dev_priv,
i915_reg_t addr,
const u32 mask, const u32 val)
static void wa_add(struct drm_i915_private *i915,
i915_reg_t reg, const u32 mask, const u32 val)
{
const unsigned int idx = dev_priv->workarounds.count;
struct i915_workarounds *wa = &i915->workarounds;
unsigned int start = 0, end = wa->count;
unsigned int addr = i915_mmio_reg_offset(reg);
struct i915_wa_reg *r;
if (WARN_ON(idx >= I915_MAX_WA_REGS))
return -ENOSPC;
while (start < end) {
unsigned int mid = start + (end - start) / 2;
dev_priv->workarounds.reg[idx].addr = addr;
dev_priv->workarounds.reg[idx].value = val;
dev_priv->workarounds.reg[idx].mask = mask;
if (wa->reg[mid].addr < addr) {
start = mid + 1;
} else if (wa->reg[mid].addr > addr) {
end = mid;
} else {
r = &wa->reg[mid];
dev_priv->workarounds.count++;
if ((mask & ~r->mask) == 0) {
DRM_ERROR("Discarding overwritten w/a for reg %04x (mask: %08x, value: %08x)\n",
addr, r->mask, r->value);
return 0;
r->value &= ~mask;
}
r->value |= val;
r->mask |= mask;
return;
}
}
if (WARN_ON_ONCE(wa->count >= I915_MAX_WA_REGS)) {
DRM_ERROR("Dropping w/a for reg %04x (mask: %08x, value: %08x)\n",
addr, mask, val);
return;
}
r = &wa->reg[wa->count++];
r->addr = addr;
r->value = val;
r->mask = mask;
while (r-- > wa->reg) {
GEM_BUG_ON(r[0].addr == r[1].addr);
if (r[1].addr > r[0].addr)
break;
swap(r[1], r[0]);
}
}
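
For readers skimming the new wa_add(): the list is now kept sorted by register offset so a binary search can find an existing entry and merge the new mask/value into it rather than appending a duplicate, and a fresh entry is bubbled into place after being appended. The condensed standalone sketch below shows that insert-or-merge behaviour; the table size, entry type and merge rule are simplified for illustration.

/* Simplified insert-or-merge into an array kept sorted by register address. */
#include <stdio.h>

struct wa { unsigned int addr, mask, value; };

static struct wa regs[16];
static unsigned int count;

static void wa_add(unsigned int addr, unsigned int mask, unsigned int value)
{
	unsigned int start = 0, end = count;

	while (start < end) {                   /* binary search by address */
		unsigned int mid = start + (end - start) / 2;

		if (regs[mid].addr < addr) {
			start = mid + 1;
		} else if (regs[mid].addr > addr) {
			end = mid;
		} else {                        /* same register: merge the bits */
			regs[mid].value = (regs[mid].value & ~mask) | value;
			regs[mid].mask |= mask;
			return;
		}
	}

	if (count == 16)
		return;                         /* table full: drop the workaround */

	regs[count] = (struct wa){ addr, mask, value };
	for (unsigned int i = count++; i > 0 && regs[i - 1].addr > regs[i].addr; i--) {
		struct wa tmp = regs[i - 1];

		regs[i - 1] = regs[i];
		regs[i] = tmp;
	}
}

int main(void)
{
	wa_add(0x7300, 0x000f, 0x0003);
	wa_add(0x7010, 0x00f0, 0x0050);
	wa_add(0x7300, 0x0f00, 0x0200);         /* merged into the first entry */

	for (unsigned int i = 0; i < count; i++)
		printf("%04x mask=%04x value=%04x\n",
		       regs[i].addr, regs[i].mask, regs[i].value);
	return 0;
}
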
#define WA_REG(addr, mask, val) do { \
const int r = wa_add(dev_priv, (addr), (mask), (val)); \
if (r) \
return r; \
} while (0)
#define WA_REG(addr, mask, val) wa_add(dev_priv, (addr), (mask), (val))
#define WA_SET_BIT_MASKED(addr, mask) \
WA_REG(addr, (mask), _MASKED_BIT_ENABLE(mask))
@ -540,7 +569,7 @@ int intel_ctx_workarounds_emit(struct i915_request *rq)
*cs++ = MI_LOAD_REGISTER_IMM(w->count);
for (i = 0; i < w->count; i++) {
*cs++ = i915_mmio_reg_offset(w->reg[i].addr);
*cs++ = w->reg[i].addr;
*cs++ = w->reg[i].value;
}
*cs++ = MI_NOOP;
@ -666,6 +695,19 @@ static void kbl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA,
I915_READ(GEN9_GAMT_ECO_REG_RW_IA) |
GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS);
/* WaKBLVECSSemaphoreWaitPoll:kbl */
if (IS_KBL_REVID(dev_priv, KBL_REVID_A0, KBL_REVID_E0)) {
struct intel_engine_cs *engine;
unsigned int tmp;
for_each_engine(engine, dev_priv, tmp) {
if (engine->id == RCS)
continue;
I915_WRITE(RING_SEMA_WAIT_POLL(engine->mmio_base), 1);
}
}
}
static void glk_gt_workarounds_apply(struct drm_i915_private *dev_priv)


@ -1003,7 +1003,7 @@ static int gpu_write(struct i915_vma *vma,
reservation_object_unlock(vma->resv);
err_request:
__i915_request_add(rq, err == 0);
i915_request_add(rq);
return err;
}


@ -199,7 +199,7 @@ static int gpu_set(struct drm_i915_gem_object *obj,
cs = intel_ring_begin(rq, 4);
if (IS_ERR(cs)) {
__i915_request_add(rq, false);
i915_request_add(rq);
i915_vma_unpin(vma);
return PTR_ERR(cs);
}
@ -229,7 +229,7 @@ static int gpu_set(struct drm_i915_gem_object *obj,
reservation_object_add_excl_fence(obj->resv, &rq->fence);
reservation_object_unlock(obj->resv);
__i915_request_add(rq, true);
i915_request_add(rq);
return 0;
}


@ -182,12 +182,12 @@ static int gpu_fill(struct drm_i915_gem_object *obj,
reservation_object_add_excl_fence(obj->resv, &rq->fence);
reservation_object_unlock(obj->resv);
__i915_request_add(rq, true);
i915_request_add(rq);
return 0;
err_request:
__i915_request_add(rq, false);
i915_request_add(rq);
err_batch:
i915_vma_unpin(batch);
err_vma:
@ -519,8 +519,8 @@ static int igt_switch_to_kernel_context(void *arg)
mutex_lock(&i915->drm.struct_mutex);
ctx = kernel_context(i915);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out_unlock;
mutex_unlock(&i915->drm.struct_mutex);
return PTR_ERR(ctx);
}
/* First check idling each individual engine */


@ -135,21 +135,19 @@ static int igt_ppgtt_alloc(void *arg)
struct drm_i915_private *dev_priv = arg;
struct i915_hw_ppgtt *ppgtt;
u64 size, last;
int err;
int err = 0;
/* Allocate a ppggt and try to fill the entire range */
if (!USES_PPGTT(dev_priv))
return 0;
ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
if (!ppgtt)
return -ENOMEM;
mutex_lock(&dev_priv->drm.struct_mutex);
err = __hw_ppgtt_init(ppgtt, dev_priv);
if (err)
goto err_ppgtt;
ppgtt = __hw_ppgtt_create(dev_priv);
if (IS_ERR(ppgtt)) {
err = PTR_ERR(ppgtt);
goto err_unlock;
}
if (!ppgtt->vm.allocate_va_range)
goto err_ppgtt_cleanup;
@ -189,9 +187,9 @@ static int igt_ppgtt_alloc(void *arg)
err_ppgtt_cleanup:
ppgtt->vm.cleanup(&ppgtt->vm);
err_ppgtt:
mutex_unlock(&dev_priv->drm.struct_mutex);
kfree(ppgtt);
err_unlock:
mutex_unlock(&dev_priv->drm.struct_mutex);
return err;
}


@ -342,9 +342,9 @@ static int live_nop_request(void *arg)
mutex_lock(&i915->drm.struct_mutex);
for_each_engine(engine, i915, id) {
IGT_TIMEOUT(end_time);
struct i915_request *request;
struct i915_request *request = NULL;
unsigned long n, prime;
IGT_TIMEOUT(end_time);
ktime_t times[2] = {};
err = begin_live_test(&t, i915, __func__, engine->name);
@ -466,7 +466,7 @@ empty_request(struct intel_engine_cs *engine,
goto out_request;
out_request:
__i915_request_add(request, err == 0);
i915_request_add(request);
return err ? ERR_PTR(err) : request;
}


@ -245,7 +245,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
err = emit_recurse_batch(h, rq);
if (err) {
__i915_request_add(rq, false);
i915_request_add(rq);
return ERR_PTR(err);
}
@ -318,7 +318,7 @@ static int igt_hang_sanitycheck(void *arg)
*h.batch = MI_BATCH_BUFFER_END;
i915_gem_chipset_flush(i915);
__i915_request_add(rq, true);
i915_request_add(rq);
timeout = i915_request_wait(rq,
I915_WAIT_LOCKED,
@ -464,7 +464,7 @@ static int __igt_reset_engine(struct drm_i915_private *i915, bool active)
}
i915_request_get(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
mutex_unlock(&i915->drm.struct_mutex);
if (!wait_until_running(&h, rq)) {
@ -742,7 +742,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
}
i915_request_get(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
mutex_unlock(&i915->drm.struct_mutex);
if (!wait_until_running(&h, rq)) {
@ -942,7 +942,7 @@ static int igt_wait_reset(void *arg)
}
i915_request_get(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
if (!wait_until_running(&h, rq)) {
struct drm_printer p = drm_info_printer(i915->drm.dev);
@ -1037,7 +1037,7 @@ static int igt_reset_queue(void *arg)
}
i915_request_get(prev);
__i915_request_add(prev, true);
i915_request_add(prev);
count = 0;
do {
@ -1051,7 +1051,7 @@ static int igt_reset_queue(void *arg)
}
i915_request_get(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
/*
* XXX We don't handle resetting the kernel context
@ -1184,7 +1184,7 @@ static int igt_handle_error(void *arg)
}
i915_request_get(rq);
__i915_request_add(rq, true);
i915_request_add(rq);
if (!wait_until_running(&h, rq)) {
struct drm_printer p = drm_info_printer(i915->drm.dev);


@@ -155,7 +155,7 @@ spinner_create_request(struct spinner *spin,
 	err = emit_recurse_batch(spin, rq, arbitration_command);
 	if (err) {
-		__i915_request_add(rq, false);
+		i915_request_add(rq);
 		return ERR_PTR(err);
 	}

@@ -75,7 +75,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
 	i915_gem_object_get(result);
 	i915_gem_object_set_active_reference(result);
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 	i915_vma_unpin(vma);
 	return result;

@@ -80,12 +80,13 @@ mock_ppgtt(struct drm_i915_private *i915,
 	ppgtt->vm.clear_range = nop_clear_range;
 	ppgtt->vm.insert_page = mock_insert_page;
 	ppgtt->vm.insert_entries = mock_insert_entries;
-	ppgtt->vm.bind_vma = mock_bind_ppgtt;
-	ppgtt->vm.unbind_vma = mock_unbind_ppgtt;
-	ppgtt->vm.set_pages = ppgtt_set_pages;
-	ppgtt->vm.clear_pages = clear_pages;
 	ppgtt->vm.cleanup = mock_cleanup;
+	ppgtt->vm.vma_ops.bind_vma = mock_bind_ppgtt;
+	ppgtt->vm.vma_ops.unbind_vma = mock_unbind_ppgtt;
+	ppgtt->vm.vma_ops.set_pages = ppgtt_set_pages;
+	ppgtt->vm.vma_ops.clear_pages = clear_pages;
 	return ppgtt;
 }
@@ -116,12 +117,13 @@ void mock_init_ggtt(struct drm_i915_private *i915)
 	ggtt->vm.clear_range = nop_clear_range;
 	ggtt->vm.insert_page = mock_insert_page;
 	ggtt->vm.insert_entries = mock_insert_entries;
-	ggtt->vm.bind_vma = mock_bind_ggtt;
-	ggtt->vm.unbind_vma = mock_unbind_ggtt;
-	ggtt->vm.set_pages = ggtt_set_pages;
-	ggtt->vm.clear_pages = clear_pages;
 	ggtt->vm.cleanup = mock_cleanup;
+	ggtt->vm.vma_ops.bind_vma = mock_bind_ggtt;
+	ggtt->vm.vma_ops.unbind_vma = mock_unbind_ggtt;
+	ggtt->vm.vma_ops.set_pages = ggtt_set_pages;
+	ggtt->vm.vma_ops.clear_pages = clear_pages;
 	i915_address_space_init(&ggtt->vm, i915, "global");
 }
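
The two hunks above move the mock GTT's per-VMA callbacks (bind_vma, unbind_vma, set_pages, clear_pages) out of the address-space struct's direct members and into a nested vma_ops table, while address-space-wide hooks such as cleanup stay where they were. A simplified, self-contained illustration of that grouping (the struct and callback names are stand-ins, not the driver's real definitions):

#include <stdio.h>

struct vma;	/* opaque here; only pointers to it are used */

/* Per-VMA callbacks grouped into a single ops table. */
struct vma_ops {
	int  (*bind_vma)(struct vma *vma);
	void (*unbind_vma)(struct vma *vma);
	int  (*set_pages)(struct vma *vma);
	void (*clear_pages)(struct vma *vma);
};

struct address_space {
	/* address-space-wide hooks remain direct members ... */
	void (*cleanup)(struct address_space *vm);
	/* ... while the per-VMA hooks live behind vma_ops */
	struct vma_ops vma_ops;
};

static int mock_bind(struct vma *vma)          { (void)vma; return 0; }
static void mock_unbind(struct vma *vma)       { (void)vma; }
static int mock_set_pages(struct vma *vma)     { (void)vma; return 0; }
static void mock_clear_pages(struct vma *vma)  { (void)vma; }
static void mock_cleanup(struct address_space *vm) { (void)vm; }

int main(void)
{
	struct address_space vm = {
		.cleanup = mock_cleanup,
		.vma_ops = {
			.bind_vma    = mock_bind,
			.unbind_vma  = mock_unbind,
			.set_pages   = mock_set_pages,
			.clear_pages = mock_clear_pages,
		},
	};

	printf("vma_ops wired: %d\n", vm.vma_ops.bind_vma == mock_bind);
	return 0;
}
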

@@ -349,7 +349,6 @@
 #define INTEL_KBL_GT2_IDS(info) \
 	INTEL_VGA_DEVICE(0x5916, info), /* ULT GT2 */ \
 	INTEL_VGA_DEVICE(0x5917, info), /* Mobile GT2 */ \
-	INTEL_VGA_DEVICE(0x591C, info), /* ULX GT2 */ \
 	INTEL_VGA_DEVICE(0x5921, info), /* ULT GT2F */ \
 	INTEL_VGA_DEVICE(0x591E, info), /* ULX GT2 */ \
 	INTEL_VGA_DEVICE(0x5912, info), /* DT GT2 */ \
@@ -365,11 +364,17 @@
 #define INTEL_KBL_GT4_IDS(info) \
 	INTEL_VGA_DEVICE(0x593B, info) /* Halo GT4 */
+/* AML/KBL Y GT2 */
+#define INTEL_AML_GT2_IDS(info) \
+	INTEL_VGA_DEVICE(0x591C, info), /* ULX GT2 */ \
+	INTEL_VGA_DEVICE(0x87C0, info) /* ULX GT2 */
 #define INTEL_KBL_IDS(info) \
 	INTEL_KBL_GT1_IDS(info), \
 	INTEL_KBL_GT2_IDS(info), \
 	INTEL_KBL_GT3_IDS(info), \
-	INTEL_KBL_GT4_IDS(info)
+	INTEL_KBL_GT4_IDS(info), \
+	INTEL_AML_GT2_IDS(info)
 /* CFL S */
 #define INTEL_CFL_S_GT1_IDS(info) \
@@ -388,32 +393,40 @@
 	INTEL_VGA_DEVICE(0x3E9B, info), /* Halo GT2 */ \
 	INTEL_VGA_DEVICE(0x3E94, info) /* Halo GT2 */
-/* CFL U GT1 */
-#define INTEL_CFL_U_GT1_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA1, info), \
-	INTEL_VGA_DEVICE(0x3EA4, info)
 /* CFL U GT2 */
 #define INTEL_CFL_U_GT2_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA0, info), \
-	INTEL_VGA_DEVICE(0x3EA3, info), \
 	INTEL_VGA_DEVICE(0x3EA9, info)
 /* CFL U GT3 */
 #define INTEL_CFL_U_GT3_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA2, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA5, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA6, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA7, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA8, info) /* ULT GT3 */
+/* WHL/CFL U GT1 */
+#define INTEL_WHL_U_GT1_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA1, info)
+/* WHL/CFL U GT2 */
+#define INTEL_WHL_U_GT2_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA0, info)
+/* WHL/CFL U GT3 */
+#define INTEL_WHL_U_GT3_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA2, info), \
+	INTEL_VGA_DEVICE(0x3EA3, info), \
+	INTEL_VGA_DEVICE(0x3EA4, info)
 #define INTEL_CFL_IDS(info) \
 	INTEL_CFL_S_GT1_IDS(info), \
 	INTEL_CFL_S_GT2_IDS(info), \
 	INTEL_CFL_H_GT2_IDS(info), \
-	INTEL_CFL_U_GT1_IDS(info), \
 	INTEL_CFL_U_GT2_IDS(info), \
-	INTEL_CFL_U_GT3_IDS(info)
+	INTEL_CFL_U_GT3_IDS(info), \
+	INTEL_WHL_U_GT1_IDS(info), \
+	INTEL_WHL_U_GT2_IDS(info), \
+	INTEL_WHL_U_GT3_IDS(info)
 /* CNL */
 #define INTEL_CNL_IDS(info) \