TTY/Serial patches for 4.9-rc1

Here is the big TTY and Serial patch set for 4.9-rc1.
 
 It also includes some drivers/dma/ changes, as those were needed by some
 serial drivers, and they were all acked by the DMA maintainer.  Also in
 here is the long-suffering ACPI SPCR patchset, which was passed around
 from maintainer to maintainer like a hot-potato.  Seems I was the
 sucker^Wlucky one.  All of those patches have been acked by the various
 subsystem maintainers as well.
 
 All of this has been in linux-next with no reported issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iFYEABECABYFAlfyNjEPHGdyZWdAa3JvYWguY29tAAoJEDFH1A3bLfspwIcAn2uN
 qCD8xQJ0Cs61hD1nUzhNygG8AJ94I4zz/fPGpyh/CtJfLQwtUdLhNA==
 =Rken
 -----END PGP SIGNATURE-----

Merge tag 'tty-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty

Pull tty and serial updates from Greg KH:
 "Here is the big tty and serial patch set for 4.9-rc1.

  It also includes some drivers/dma/ changes, as those were needed by
  some serial drivers, and they were all acked by the DMA maintainer.

  Also in here is the long-suffering ACPI SPCR patchset, which was
  passed around from maintainer to maintainer like a hot-potato. Seems I
  was the sucker^Wlucky one. All of those patches have been acked by the
  various subsystem maintainers as well.

  All of this has been in linux-next with no reported issues"

* tag 'tty-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (111 commits)
  Revert "serial: pl011: add console matching function"
  MAINTAINERS: update entry for atmel_serial driver
  serial: pl011: add console matching function
  ARM64: ACPI: enable ACPI_SPCR_TABLE
  ACPI: parse SPCR and enable matching console
  of/serial: move earlycon early_param handling to serial
  Revert "drivers/tty: Explicitly pass current to show_stack"
  tty: amba-pl011: Don't complain on -EPROBE_DEFER when no irq
  nios2: dts: 10m50: Add tx-threshold parameter
  serial: 8250: Set Altera 16550 TX FIFO Threshold
  serial: 8250: of: Load TX FIFO Threshold from DT
  Documentation: dt: serial: Add TX FIFO threshold parameter
  drivers/tty: Explicitly pass current to show_stack
  serial: imx: Fix DCD reading
  serial: stm32: mark symbols static where possible
  serial: xuartps: Add some register initialisation to cdns_early_console_setup()
  serial: xuartps: Removed unwanted checks while reading the error conditions
  serial: xuartps: Rewrite the interrupt handling logic
  serial: stm32: use mapbase instead of membase for DMA
  tty/serial: atmel: fix fractional baud rate computation
  ...
Linus Torvalds 2016-10-03 20:11:49 -07:00
commit e6dce825fb
69 changed files with 2624 additions and 1283 deletions


@ -42,6 +42,8 @@ Optional properties:
- auto-flow-control: one way to enable automatic flow control support. The
driver is allowed to detect support for the capability even without this
property.
- tx-threshold: Specify the TX FIFO low water indication for parts with
programmable TX FIFO thresholds.
Note:
* fsl,ns16550:


@ -0,0 +1,46 @@
* STMicroelectronics STM32 USART
Required properties:
- compatible: Can be either "st,stm32-usart", "st,stm32-uart",
"st,stm32f7-usart" or "st,stm32f7-uart" depending on whether
the device supports synchronous mode and is compatible with
stm32(f4) or stm32f7.
- reg: The address and length of the peripheral registers space
- interrupts: The interrupt line of the USART instance
- clocks: The input clock of the USART instance
Optional properties:
- pinctrl: The reference on the pins configuration
- st,hw-flow-ctrl: bool flag to enable hardware flow control.
- dmas: phandle(s) to DMA controller node(s). Refer to stm32-dma.txt
- dma-names: "rx" and/or "tx"
Examples:
usart4: serial@40004c00 {
compatible = "st,stm32-uart";
reg = <0x40004c00 0x400>;
interrupts = <52>;
clocks = <&clk_pclk1>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usart4>;
};
usart2: serial@40004400 {
compatible = "st,stm32-usart", "st,stm32-uart";
reg = <0x40004400 0x400>;
interrupts = <38>;
clocks = <&clk_pclk1>;
st,hw-flow-ctrl;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usart2 &pinctrl_usart2_rtscts>;
};
usart1: serial@40011000 {
compatible = "st,stm32-usart", "st,stm32-uart";
reg = <0x40011000 0x400>;
interrupts = <37>;
clocks = <&rcc 0 164>;
dmas = <&dma2 2 4 0x414 0x0>,
<&dma2 7 4 0x414 0x0>;
dma-names = "rx", "tx";
};


@ -1054,11 +1054,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
determined by the stdout-path property in device
tree's chosen node.
cdns,<addr>
Start an early, polled-mode console on a cadence serial
port at the specified address. The cadence serial port
must already be setup and configured. Options are not
yet supported.
cdns,<addr>[,options]
Start an early, polled-mode console on a Cadence
(xuartps) serial port at the specified address. Only
supported option is baud rate. If baud rate is not
specified, the serial port must already be set up and
configured.
uart[8250],io,<addr>[,options]
uart[8250],mmio,<addr>[,options]
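
As an illustration of the new cdns form (the address and baud rate below are made-up example values, not part of the patch), one could pass:

	earlycon=cdns,0xff000000,115200

Without the trailing baud rate the behaviour is as before: the port must already be set up and configured.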


@ -2121,11 +2121,6 @@ M: Ludovic Desroches <ludovic.desroches@atmel.com>
S: Maintained
F: drivers/mmc/host/atmel-mci.c
ATMEL AT91 / AT32 SERIAL DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
F: drivers/tty/serial/atmel_serial.c
ATMEL AT91 SAMA5D2-Compatible Shutdown Controller
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
@ -7774,6 +7769,12 @@ T: git git://git.monstr.eu/linux-2.6-microblaze.git
S: Supported
F: arch/microblaze/
MICROCHIP / ATMEL AT91 / AT32 SERIAL DRIVER
M: Richard Genoud <richard.genoud@gmail.com>
S: Maintained
F: drivers/tty/serial/atmel_serial.c
F: include/linux/atmel_serial.h
MICROSOFT SURFACE PRO 3 BUTTON DRIVER
M: Chen Yu <yu.c.chen@intel.com>
L: platform-driver-x86@vger.kernel.org


@ -4,6 +4,7 @@ config ARM64
select ACPI_GENERIC_GSI if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ACPI_MCFG if ACPI
select ACPI_SPCR_TABLE if ACPI
select ARCH_CLOCKSOURCE_DATA
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI


@ -24,6 +24,7 @@
#include <linux/memblock.h>
#include <linux/of_fdt.h>
#include <linux/smp.h>
#include <linux/serial_core.h>
#include <asm/cputype.h>
#include <asm/cpu_ops.h>
@ -206,7 +207,7 @@ void __init acpi_boot_table_init(void)
if (param_acpi_off ||
(!param_acpi_on && !param_acpi_force &&
of_scan_flat_dt(dt_scan_depth1_nodes, NULL)))
return;
goto done;
/*
* ACPI is disabled at this point. Enable it in order to parse
@ -226,6 +227,14 @@ void __init acpi_boot_table_init(void)
if (!param_acpi_force)
disable_acpi();
}
done:
if (acpi_disabled) {
if (earlycon_init_is_deferred)
early_init_dt_scan_chosen_stdout();
} else {
parse_spcr(earlycon_init_is_deferred);
}
}
#ifdef CONFIG_ACPI_APEI


@ -83,6 +83,7 @@
fifo-size = <32>;
reg-io-width = <4>;
reg-shift = <2>;
tx-threshold = <16>;
};
sysid: sysid@18001528 {


@ -77,6 +77,9 @@ config ACPI_DEBUGGER_USER
endif
config ACPI_SPCR_TABLE
bool
config ACPI_SLEEP
bool
depends on SUSPEND || HIBERNATION


@ -82,6 +82,7 @@ obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o
obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
obj-$(CONFIG_ACPI_BGRT) += bgrt.o
obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o
obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o
obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o
# processor has its own "processor." module_param namespace

drivers/acpi/spcr.c (new file, 111 lines added)

@ -0,0 +1,111 @@
/*
* Copyright (c) 2012, Intel Corporation
* Copyright (c) 2015, Red Hat, Inc.
* Copyright (c) 2015, 2016 Linaro Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#define pr_fmt(fmt) "ACPI: SPCR: " fmt
#include <linux/acpi.h>
#include <linux/console.h>
#include <linux/kernel.h>
#include <linux/serial_core.h>
/**
* parse_spcr() - parse ACPI SPCR table and add preferred console
*
* @earlycon: set up earlycon for the console specified by the table
*
* For architectures with ACPI support, CONFIG_ACPI_SPCR_TABLE may be
* defined to parse the ACPI SPCR table. As a result of the parsing, the
* preferred console is registered and, if @earlycon is true, earlycon is set up.
*
* When CONFIG_ACPI_SPCR_TABLE is defined, this function should be called
* from arch initialization code as soon as the DT/ACPI decision is made.
*
*/
int __init parse_spcr(bool earlycon)
{
static char opts[64];
struct acpi_table_spcr *table;
acpi_size table_size;
acpi_status status;
char *uart;
char *iotype;
int baud_rate;
int err;
if (acpi_disabled)
return -ENODEV;
status = acpi_get_table_with_size(ACPI_SIG_SPCR, 0,
(struct acpi_table_header **)&table,
&table_size);
if (ACPI_FAILURE(status))
return -ENOENT;
if (table->header.revision < 2) {
err = -ENOENT;
pr_err("wrong table version\n");
goto done;
}
iotype = table->serial_port.space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY ?
"mmio" : "io";
switch (table->interface_type) {
case ACPI_DBG2_ARM_SBSA_32BIT:
iotype = "mmio32";
/* fall through */
case ACPI_DBG2_ARM_PL011:
case ACPI_DBG2_ARM_SBSA_GENERIC:
case ACPI_DBG2_BCM2835:
uart = "pl011";
break;
case ACPI_DBG2_16550_COMPATIBLE:
case ACPI_DBG2_16550_SUBSET:
uart = "uart";
break;
default:
err = -ENOENT;
goto done;
}
switch (table->baud_rate) {
case 3:
baud_rate = 9600;
break;
case 4:
baud_rate = 19200;
break;
case 6:
baud_rate = 57600;
break;
case 7:
baud_rate = 115200;
break;
default:
err = -ENOENT;
goto done;
}
snprintf(opts, sizeof(opts), "%s,%s,0x%llx,%d", uart, iotype,
table->serial_port.address, baud_rate);
pr_info("console: %s\n", opts);
if (earlycon)
setup_earlycon(opts);
err = add_preferred_console(uart, 0, opts + strlen(uart) + 1);
done:
early_acpi_os_unmap_memory((void __iomem *)table, table_size);
return err;
}
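
A minimal sketch of the expected caller (mirroring the arm64 acpi_boot_table_init() hunk earlier in this merge; earlycon_init_is_deferred comes from the earlycon rework in this same series — illustrative only, not additional patched code):

	/* arch setup code, once the DT vs. ACPI decision has been made */
	if (acpi_disabled) {
		if (earlycon_init_is_deferred)
			early_init_dt_scan_chosen_stdout();
	} else {
		parse_spcr(earlycon_init_is_deferred);
	}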


@ -46,9 +46,9 @@
u8 _dmsize = _is_slave ? _sconfig->dst_maxburst : \
DW_DMA_MSIZE_16; \
u8 _dms = (_dwc->direction == DMA_MEM_TO_DEV) ? \
_dwc->p_master : _dwc->m_master; \
_dwc->dws.p_master : _dwc->dws.m_master; \
u8 _sms = (_dwc->direction == DMA_DEV_TO_MEM) ? \
_dwc->p_master : _dwc->m_master; \
_dwc->dws.p_master : _dwc->dws.m_master; \
\
(DWC_CTLL_DST_MSIZE(_dmsize) \
| DWC_CTLL_SRC_MSIZE(_smsize) \
@ -143,12 +143,16 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
struct dw_dma *dw = to_dw_dma(dwc->chan.device);
u32 cfghi = DWC_CFGH_FIFO_MODE;
u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);
bool hs_polarity = dwc->dws.hs_polarity;
if (test_bit(DW_DMA_IS_INITIALIZED, &dwc->flags))
return;
cfghi |= DWC_CFGH_DST_PER(dwc->dst_id);
cfghi |= DWC_CFGH_SRC_PER(dwc->src_id);
cfghi |= DWC_CFGH_DST_PER(dwc->dws.dst_id);
cfghi |= DWC_CFGH_SRC_PER(dwc->dws.src_id);
/* Set polarity of handshake interface */
cfglo |= hs_polarity ? DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL : 0;
channel_writel(dwc, CFG_LO, cfglo);
channel_writel(dwc, CFG_HI, cfghi);
@ -209,7 +213,7 @@ static inline void dwc_do_single_block(struct dw_dma_chan *dwc,
static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
{
struct dw_dma *dw = to_dw_dma(dwc->chan.device);
u8 lms = DWC_LLP_LMS(dwc->m_master);
u8 lms = DWC_LLP_LMS(dwc->dws.m_master);
unsigned long was_soft_llp;
/* ASSERT: channel is idle */
@ -662,7 +666,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
struct dw_desc *prev;
size_t xfer_count;
size_t offset;
u8 m_master = dwc->m_master;
u8 m_master = dwc->dws.m_master;
unsigned int src_width;
unsigned int dst_width;
unsigned int data_width = dw->pdata->data_width[m_master];
@ -740,7 +744,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
struct dw_desc *prev;
struct dw_desc *first;
u32 ctllo;
u8 m_master = dwc->m_master;
u8 m_master = dwc->dws.m_master;
u8 lms = DWC_LLP_LMS(m_master);
dma_addr_t reg;
unsigned int reg_width;
@ -895,12 +899,7 @@ bool dw_dma_filter(struct dma_chan *chan, void *param)
return false;
/* We have to copy data since dws can be temporary storage */
dwc->src_id = dws->src_id;
dwc->dst_id = dws->dst_id;
dwc->m_master = dws->m_master;
dwc->p_master = dws->p_master;
memcpy(&dwc->dws, dws, sizeof(struct dw_dma_slave));
return true;
}
@ -1167,11 +1166,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
spin_lock_irqsave(&dwc->lock, flags);
/* Clear custom channel configuration */
dwc->src_id = 0;
dwc->dst_id = 0;
dwc->m_master = 0;
dwc->p_master = 0;
memset(&dwc->dws, 0, sizeof(struct dw_dma_slave));
clear_bit(DW_DMA_IS_INITIALIZED, &dwc->flags);
@ -1264,7 +1259,7 @@ struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
struct dw_cyclic_desc *retval = NULL;
struct dw_desc *desc;
struct dw_desc *last = NULL;
u8 lms = DWC_LLP_LMS(dwc->m_master);
u8 lms = DWC_LLP_LMS(dwc->dws.m_master);
unsigned long was_cyclic;
unsigned int reg_width;
unsigned int periods;
@ -1576,11 +1571,7 @@ int dw_dma_probe(struct dw_dma_chip *chip)
(dwc_params >> DWC_PARAMS_MBLK_EN & 0x1) == 0;
} else {
dwc->block_size = pdata->block_size;
/* Check if channel supports multi block transfer */
channel_writel(dwc, LLP, DWC_LLP_LOC(0xffffffff));
dwc->nollp = DWC_LLP_LOC(channel_readl(dwc, LLP)) == 0;
channel_writel(dwc, LLP, 0);
dwc->nollp = pdata->is_nollp;
}
}


@ -245,10 +245,7 @@ struct dw_dma_chan {
bool nollp;
/* custom slave configuration */
u8 src_id;
u8 dst_id;
u8 m_master;
u8 p_master;
struct dw_dma_slave dws;
/* configuration passed via .device_config */
struct dma_slave_config dma_sconfig;


@ -200,10 +200,9 @@ EXPORT_SYMBOL_GPL(hsu_dma_get_status);
* is not a normal timeout interrupt, ie. hsu_dma_get_status() returned 0.
*
* Return:
* IRQ_NONE for invalid channel number, IRQ_HANDLED otherwise.
* 0 for invalid channel number, 1 otherwise.
*/
irqreturn_t hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr,
u32 status)
int hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr, u32 status)
{
struct hsu_dma_chan *hsuc;
struct hsu_dma_desc *desc;
@ -211,7 +210,7 @@ irqreturn_t hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr,
/* Sanity check */
if (nr >= chip->hsu->nr_channels)
return IRQ_NONE;
return 0;
hsuc = &chip->hsu->chan[nr];
@ -230,7 +229,7 @@ irqreturn_t hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr,
}
spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
return IRQ_HANDLED;
return 1;
}
EXPORT_SYMBOL_GPL(hsu_dma_do_irq);


@ -29,7 +29,7 @@ static irqreturn_t hsu_pci_irq(int irq, void *dev)
u32 dmaisr;
u32 status;
unsigned short i;
irqreturn_t ret = IRQ_NONE;
int ret = 0;
int err;
dmaisr = readl(chip->regs + HSU_PCI_DMAISR);
@ -37,14 +37,14 @@ static irqreturn_t hsu_pci_irq(int irq, void *dev)
if (dmaisr & 0x1) {
err = hsu_dma_get_status(chip, i, &status);
if (err > 0)
ret |= IRQ_HANDLED;
ret |= 1;
else if (err == 0)
ret |= hsu_dma_do_irq(chip, i, status);
}
dmaisr >>= 1;
}
return ret;
return IRQ_RETVAL(ret);
}
static int hsu_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)


@ -648,15 +648,11 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
writel_relaxed(val, sdma->regs + chnenbl);
}
static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
{
if (sdmac->desc.callback)
sdmac->desc.callback(sdmac->desc.callback_param);
}
static void sdma_update_channel_loop(struct sdma_channel *sdmac)
{
struct sdma_buffer_descriptor *bd;
int error = 0;
enum dma_status old_status = sdmac->status;
/*
* loop mode. Iterate over descriptors, re-setup them and
@ -668,17 +664,42 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
if (bd->mode.status & BD_DONE)
break;
if (bd->mode.status & BD_RROR)
if (bd->mode.status & BD_RROR) {
bd->mode.status &= ~BD_RROR;
sdmac->status = DMA_ERROR;
error = -EIO;
}
/*
* We use bd->mode.count to calculate the residue, since it contains
* the number of bytes present in the current buffer descriptor.
*/
sdmac->chn_real_count = bd->mode.count;
bd->mode.status |= BD_DONE;
bd->mode.count = sdmac->period_len;
/*
* The callback is called from the interrupt context in order
* to reduce latency and to avoid the risk of altering the
* SDMA transaction status by the time the client tasklet is
* executed.
*/
if (sdmac->desc.callback)
sdmac->desc.callback(sdmac->desc.callback_param);
sdmac->buf_tail++;
sdmac->buf_tail %= sdmac->num_bd;
if (error)
sdmac->status = old_status;
}
}
static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
static void mxc_sdma_handle_channel_normal(unsigned long data)
{
struct sdma_channel *sdmac = (struct sdma_channel *) data;
struct sdma_buffer_descriptor *bd;
int i, error = 0;
@ -705,16 +726,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
sdmac->desc.callback(sdmac->desc.callback_param);
}
static void sdma_tasklet(unsigned long data)
{
struct sdma_channel *sdmac = (struct sdma_channel *) data;
if (sdmac->flags & IMX_DMA_SG_LOOP)
sdma_handle_channel_loop(sdmac);
else
mxc_sdma_handle_channel_normal(sdmac);
}
static irqreturn_t sdma_int_handler(int irq, void *dev_id)
{
struct sdma_engine *sdma = dev_id;
@ -731,8 +742,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
if (sdmac->flags & IMX_DMA_SG_LOOP)
sdma_update_channel_loop(sdmac);
tasklet_schedule(&sdmac->tasklet);
else
tasklet_schedule(&sdmac->tasklet);
__clear_bit(channel, &stat);
}
@ -1353,7 +1364,8 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan,
u32 residue;
if (sdmac->flags & IMX_DMA_SG_LOOP)
residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len;
residue = (sdmac->num_bd - sdmac->buf_tail) *
sdmac->period_len - sdmac->chn_real_count;
else
residue = sdmac->chn_count - sdmac->chn_real_count;
@ -1732,7 +1744,7 @@ static int sdma_probe(struct platform_device *pdev)
dma_cookie_init(&sdmac->chan);
sdmac->channel = i;
tasklet_init(&sdmac->tasklet, sdma_tasklet,
tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
(unsigned long) sdmac);
/*
* Add the channel to the DMAC list. Do not add channel 0 though


@ -924,7 +924,7 @@ static inline void early_init_dt_check_for_initrd(unsigned long node)
#ifdef CONFIG_SERIAL_EARLYCON
static int __init early_init_dt_scan_chosen_serial(void)
int __init early_init_dt_scan_chosen_stdout(void)
{
int offset;
const char *p, *q, *options = NULL;
@ -968,15 +968,6 @@ static int __init early_init_dt_scan_chosen_serial(void)
}
return -ENODEV;
}
static int __init setup_of_earlycon(char *buf)
{
if (buf)
return 0;
return early_init_dt_scan_chosen_serial();
}
early_param("earlycon", setup_of_earlycon);
#endif
/**


@ -800,7 +800,7 @@ static int ptmx_open(struct inode *inode, struct file *filp)
return retval;
}
static struct file_operations ptmx_fops;
static struct file_operations ptmx_fops __ro_after_init;
static void __init unix98_pty_init(void)
{


@ -31,6 +31,11 @@ struct uart_8250_dma {
struct dma_chan *rxchan;
struct dma_chan *txchan;
/* Device address base for DMA operations */
phys_addr_t rx_dma_addr;
phys_addr_t tx_dma_addr;
/* DMA address of the buffer in memory */
dma_addr_t rx_addr;
dma_addr_t tx_addr;


@ -639,7 +639,7 @@ static int univ8250_console_match(struct console *co, char *name, int idx,
{
char match[] = "uart"; /* 8250-specific earlycon name */
unsigned char iotype;
unsigned long addr;
resource_size_t addr;
int i;
if (strncmp(name, match, 4) != 0)


@ -142,7 +142,7 @@ void serial8250_rx_dma_flush(struct uart_8250_port *p)
if (dma->rx_running) {
dmaengine_pause(dma->rxchan);
__dma_rx_complete(p);
dmaengine_terminate_all(dma->rxchan);
dmaengine_terminate_async(dma->rxchan);
}
}
EXPORT_SYMBOL_GPL(serial8250_rx_dma_flush);
@ -150,6 +150,10 @@ EXPORT_SYMBOL_GPL(serial8250_rx_dma_flush);
int serial8250_request_dma(struct uart_8250_port *p)
{
struct uart_8250_dma *dma = p->dma;
phys_addr_t rx_dma_addr = dma->rx_dma_addr ?
dma->rx_dma_addr : p->port.mapbase;
phys_addr_t tx_dma_addr = dma->tx_dma_addr ?
dma->tx_dma_addr : p->port.mapbase;
dma_cap_mask_t mask;
struct dma_slave_caps caps;
int ret;
@ -157,11 +161,11 @@ int serial8250_request_dma(struct uart_8250_port *p)
/* Default slave configuration parameters */
dma->rxconf.direction = DMA_DEV_TO_MEM;
dma->rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
dma->rxconf.src_addr = p->port.mapbase + UART_RX;
dma->rxconf.src_addr = rx_dma_addr + UART_RX;
dma->txconf.direction = DMA_MEM_TO_DEV;
dma->txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
dma->txconf.dst_addr = p->port.mapbase + UART_TX;
dma->txconf.dst_addr = tx_dma_addr + UART_TX;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
@ -247,14 +251,14 @@ void serial8250_release_dma(struct uart_8250_port *p)
return;
/* Release RX resources */
dmaengine_terminate_all(dma->rxchan);
dmaengine_terminate_sync(dma->rxchan);
dma_free_coherent(dma->rxchan->device->dev, dma->rx_size, dma->rx_buf,
dma->rx_addr);
dma_release_channel(dma->rxchan);
dma->rxchan = NULL;
/* Release TX resources */
dmaengine_terminate_all(dma->txchan);
dmaengine_terminate_sync(dma->txchan);
dma_unmap_single(dma->txchan->device->dev, dma->tx_addr,
UART_XMIT_SIZE, DMA_TO_DEVICE);
dma_release_channel(dma->txchan);


@ -365,18 +365,19 @@ static int dw8250_probe(struct platform_device *pdev)
struct resource *regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
int irq = platform_get_irq(pdev, 0);
struct uart_port *p = &uart.port;
struct device *dev = &pdev->dev;
struct dw8250_data *data;
int err;
u32 val;
if (!regs) {
dev_err(&pdev->dev, "no registers defined\n");
dev_err(dev, "no registers defined\n");
return -EINVAL;
}
if (irq < 0) {
if (irq != -EPROBE_DEFER)
dev_err(&pdev->dev, "cannot get irq\n");
dev_err(dev, "cannot get irq\n");
return irq;
}
@ -387,16 +388,16 @@ static int dw8250_probe(struct platform_device *pdev)
p->pm = dw8250_do_pm;
p->type = PORT_8250;
p->flags = UPF_SHARE_IRQ | UPF_FIXED_PORT;
p->dev = &pdev->dev;
p->dev = dev;
p->iotype = UPIO_MEM;
p->serial_in = dw8250_serial_in;
p->serial_out = dw8250_serial_out;
p->membase = devm_ioremap(&pdev->dev, regs->start, resource_size(regs));
p->membase = devm_ioremap(dev, regs->start, resource_size(regs));
if (!p->membase)
return -ENOMEM;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
@ -404,57 +405,57 @@ static int dw8250_probe(struct platform_device *pdev)
data->usr_reg = DW_UART_USR;
p->private_data = data;
data->uart_16550_compatible = device_property_read_bool(p->dev,
data->uart_16550_compatible = device_property_read_bool(dev,
"snps,uart-16550-compatible");
err = device_property_read_u32(p->dev, "reg-shift", &val);
err = device_property_read_u32(dev, "reg-shift", &val);
if (!err)
p->regshift = val;
err = device_property_read_u32(p->dev, "reg-io-width", &val);
err = device_property_read_u32(dev, "reg-io-width", &val);
if (!err && val == 4) {
p->iotype = UPIO_MEM32;
p->serial_in = dw8250_serial_in32;
p->serial_out = dw8250_serial_out32;
}
if (device_property_read_bool(p->dev, "dcd-override")) {
if (device_property_read_bool(dev, "dcd-override")) {
/* Always report DCD as active */
data->msr_mask_on |= UART_MSR_DCD;
data->msr_mask_off |= UART_MSR_DDCD;
}
if (device_property_read_bool(p->dev, "dsr-override")) {
if (device_property_read_bool(dev, "dsr-override")) {
/* Always report DSR as active */
data->msr_mask_on |= UART_MSR_DSR;
data->msr_mask_off |= UART_MSR_DDSR;
}
if (device_property_read_bool(p->dev, "cts-override")) {
if (device_property_read_bool(dev, "cts-override")) {
/* Always report CTS as active */
data->msr_mask_on |= UART_MSR_CTS;
data->msr_mask_off |= UART_MSR_DCTS;
}
if (device_property_read_bool(p->dev, "ri-override")) {
if (device_property_read_bool(dev, "ri-override")) {
/* Always report Ring indicator as inactive */
data->msr_mask_off |= UART_MSR_RI;
data->msr_mask_off |= UART_MSR_TERI;
}
/* Always ask for fixed clock rate from a property. */
device_property_read_u32(p->dev, "clock-frequency", &p->uartclk);
device_property_read_u32(dev, "clock-frequency", &p->uartclk);
/* If there is separate baudclk, get the rate from it. */
data->clk = devm_clk_get(&pdev->dev, "baudclk");
data->clk = devm_clk_get(dev, "baudclk");
if (IS_ERR(data->clk) && PTR_ERR(data->clk) != -EPROBE_DEFER)
data->clk = devm_clk_get(&pdev->dev, NULL);
data->clk = devm_clk_get(dev, NULL);
if (IS_ERR(data->clk) && PTR_ERR(data->clk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (!IS_ERR_OR_NULL(data->clk)) {
err = clk_prepare_enable(data->clk);
if (err)
dev_warn(&pdev->dev, "could not enable optional baudclk: %d\n",
dev_warn(dev, "could not enable optional baudclk: %d\n",
err);
else
p->uartclk = clk_get_rate(data->clk);
@ -462,24 +463,24 @@ static int dw8250_probe(struct platform_device *pdev)
/* If no clock rate is defined, fail. */
if (!p->uartclk) {
dev_err(&pdev->dev, "clock rate not defined\n");
dev_err(dev, "clock rate not defined\n");
return -EINVAL;
}
data->pclk = devm_clk_get(&pdev->dev, "apb_pclk");
if (IS_ERR(data->clk) && PTR_ERR(data->clk) == -EPROBE_DEFER) {
data->pclk = devm_clk_get(dev, "apb_pclk");
if (IS_ERR(data->pclk) && PTR_ERR(data->pclk) == -EPROBE_DEFER) {
err = -EPROBE_DEFER;
goto err_clk;
}
if (!IS_ERR(data->pclk)) {
err = clk_prepare_enable(data->pclk);
if (err) {
dev_err(&pdev->dev, "could not enable apb_pclk\n");
dev_err(dev, "could not enable apb_pclk\n");
goto err_clk;
}
}
data->rst = devm_reset_control_get_optional(&pdev->dev, NULL);
data->rst = devm_reset_control_get_optional(dev, NULL);
if (IS_ERR(data->rst) && PTR_ERR(data->rst) == -EPROBE_DEFER) {
err = -EPROBE_DEFER;
goto err_pclk;
@ -511,8 +512,8 @@ static int dw8250_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, data);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
return 0;
@ -624,6 +625,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
{ "APMC0D08", 0},
{ "AMD0020", 0 },
{ "AMDI0020", 0 },
{ "HISI0031", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);


@ -0,0 +1,378 @@
/*
* 8250_lpss.c - Driver for UART on Intel Braswell and various other Intel SoCs
*
* Copyright (C) 2016 Intel Corporation
* Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/bitops.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/rational.h>
#include <linux/dmaengine.h>
#include <linux/dma/dw.h>
#include "8250.h"
#define PCI_DEVICE_ID_INTEL_QRK_UARTx 0x0936
#define PCI_DEVICE_ID_INTEL_BYT_UART1 0x0f0a
#define PCI_DEVICE_ID_INTEL_BYT_UART2 0x0f0c
#define PCI_DEVICE_ID_INTEL_BSW_UART1 0x228a
#define PCI_DEVICE_ID_INTEL_BSW_UART2 0x228c
#define PCI_DEVICE_ID_INTEL_BDW_UART1 0x9ce3
#define PCI_DEVICE_ID_INTEL_BDW_UART2 0x9ce4
/* Intel LPSS specific registers */
#define BYT_PRV_CLK 0x800
#define BYT_PRV_CLK_EN BIT(0)
#define BYT_PRV_CLK_M_VAL_SHIFT 1
#define BYT_PRV_CLK_N_VAL_SHIFT 16
#define BYT_PRV_CLK_UPDATE BIT(31)
#define BYT_TX_OVF_INT 0x820
#define BYT_TX_OVF_INT_MASK BIT(1)
struct lpss8250;
struct lpss8250_board {
unsigned long freq;
unsigned int base_baud;
int (*setup)(struct lpss8250 *, struct uart_port *p);
void (*exit)(struct lpss8250 *);
};
struct lpss8250 {
int line;
struct lpss8250_board *board;
/* DMA parameters */
struct uart_8250_dma dma;
struct dw_dma_chip dma_chip;
struct dw_dma_slave dma_param;
u8 dma_maxburst;
};
static void byt_set_termios(struct uart_port *p, struct ktermios *termios,
struct ktermios *old)
{
unsigned int baud = tty_termios_baud_rate(termios);
struct lpss8250 *lpss = p->private_data;
unsigned long fref = lpss->board->freq, fuart = baud * 16;
unsigned long w = BIT(15) - 1;
unsigned long m, n;
u32 reg;
/* Gracefully handle the B0 case: fall back to B9600 */
fuart = fuart ? fuart : 9600 * 16;
/* Get Fuart closer to Fref */
fuart *= rounddown_pow_of_two(fref / fuart);
/*
* For baud rates 0.5M, 1M, 1.5M, 2M, 2.5M, 3M, 3.5M and 4M the
* dividers must be adjusted.
*
* uartclk = (m / n) * 100 MHz, where m <= n
*/
rational_best_approximation(fuart, fref, w, w, &m, &n);
p->uartclk = fuart;
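/*
 * Worked example (illustrative numbers, not part of the patch): for B115200,
 * fuart = 115200 * 16 = 1843200 Hz against fref = 100 MHz, so
 * rounddown_pow_of_two(fref / fuart) = 32 and fuart is scaled to 58982400 Hz.
 * rational_best_approximation(58982400, 100000000, 32767, 32767, &m, &n)
 * then yields m = 9216, n = 15625, i.e. uartclk = (9216 / 15625) * 100 MHz =
 * 58.9824 MHz, which the standard 16x divisor (58982400 / (16 * 115200) = 32)
 * divides exactly.
 */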
/* Reset the clock */
reg = (m << BYT_PRV_CLK_M_VAL_SHIFT) | (n << BYT_PRV_CLK_N_VAL_SHIFT);
writel(reg, p->membase + BYT_PRV_CLK);
reg |= BYT_PRV_CLK_EN | BYT_PRV_CLK_UPDATE;
writel(reg, p->membase + BYT_PRV_CLK);
p->status &= ~UPSTAT_AUTOCTS;
if (termios->c_cflag & CRTSCTS)
p->status |= UPSTAT_AUTOCTS;
serial8250_do_set_termios(p, termios, old);
}
static unsigned int byt_get_mctrl(struct uart_port *port)
{
unsigned int ret = serial8250_do_get_mctrl(port);
/* Force DCD and DSR signals to permanently be reported as active */
ret |= TIOCM_CAR | TIOCM_DSR;
return ret;
}
static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
{
struct dw_dma_slave *param = &lpss->dma_param;
struct uart_8250_port *up = up_to_u8250p(port);
struct pci_dev *pdev = to_pci_dev(port->dev);
unsigned int dma_devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
struct pci_dev *dma_dev = pci_get_slot(pdev->bus, dma_devfn);
switch (pdev->device) {
case PCI_DEVICE_ID_INTEL_BYT_UART1:
case PCI_DEVICE_ID_INTEL_BSW_UART1:
case PCI_DEVICE_ID_INTEL_BDW_UART1:
param->src_id = 3;
param->dst_id = 2;
break;
case PCI_DEVICE_ID_INTEL_BYT_UART2:
case PCI_DEVICE_ID_INTEL_BSW_UART2:
case PCI_DEVICE_ID_INTEL_BDW_UART2:
param->src_id = 5;
param->dst_id = 4;
break;
default:
return -EINVAL;
}
param->dma_dev = &dma_dev->dev;
param->m_master = 0;
param->p_master = 1;
/* TODO: Detect FIFO size automatically for DesignWare 8250 */
port->fifosize = 64;
up->tx_loadsz = 64;
lpss->dma_maxburst = 16;
port->set_termios = byt_set_termios;
port->get_mctrl = byt_get_mctrl;
/* Disable TX counter interrupts */
writel(BYT_TX_OVF_INT_MASK, port->membase + BYT_TX_OVF_INT);
return 0;
}
#ifdef CONFIG_SERIAL_8250_DMA
static const struct dw_dma_platform_data qrk_serial_dma_pdata = {
.nr_channels = 2,
.is_private = true,
.is_nollp = true,
.chan_allocation_order = CHAN_ALLOCATION_ASCENDING,
.chan_priority = CHAN_PRIORITY_ASCENDING,
.block_size = 4095,
.nr_masters = 1,
.data_width = {4},
};
static void qrk_serial_setup_dma(struct lpss8250 *lpss, struct uart_port *port)
{
struct uart_8250_dma *dma = &lpss->dma;
struct dw_dma_chip *chip = &lpss->dma_chip;
struct dw_dma_slave *param = &lpss->dma_param;
struct pci_dev *pdev = to_pci_dev(port->dev);
int ret;
chip->dev = &pdev->dev;
chip->irq = pdev->irq;
chip->regs = pci_ioremap_bar(pdev, 1);
chip->pdata = &qrk_serial_dma_pdata;
/* Falling back to PIO mode if DMA probing fails */
ret = dw_dma_probe(chip);
if (ret)
return;
/* Special DMA address for UART */
dma->rx_dma_addr = 0xfffff000;
dma->tx_dma_addr = 0xfffff000;
param->dma_dev = &pdev->dev;
param->src_id = 0;
param->dst_id = 1;
param->hs_polarity = true;
lpss->dma_maxburst = 8;
}
static void qrk_serial_exit_dma(struct lpss8250 *lpss)
{
struct dw_dma_slave *param = &lpss->dma_param;
if (!param->dma_dev)
return;
dw_dma_remove(&lpss->dma_chip);
}
#else /* CONFIG_SERIAL_8250_DMA */
static void qrk_serial_setup_dma(struct lpss8250 *lpss, struct uart_port *port) {}
static void qrk_serial_exit_dma(struct lpss8250 *lpss) {}
#endif /* !CONFIG_SERIAL_8250_DMA */
static int qrk_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
{
struct pci_dev *pdev = to_pci_dev(port->dev);
int ret;
ret = pci_alloc_irq_vectors(pdev, 1, 1, 0);
if (ret < 0)
return ret;
port->irq = pci_irq_vector(pdev, 0);
qrk_serial_setup_dma(lpss, port);
return 0;
}
static void qrk_serial_exit(struct lpss8250 *lpss)
{
qrk_serial_exit_dma(lpss);
}
static bool lpss8250_dma_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_slave *dws = param;
if (dws->dma_dev != chan->device->dev)
return false;
chan->private = dws;
return true;
}
static int lpss8250_dma_setup(struct lpss8250 *lpss, struct uart_8250_port *port)
{
struct uart_8250_dma *dma = &lpss->dma;
struct dw_dma_slave *rx_param, *tx_param;
struct device *dev = port->port.dev;
if (!lpss->dma_param.dma_dev)
return 0;
rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
if (!rx_param)
return -ENOMEM;
tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL);
if (!tx_param)
return -ENOMEM;
*rx_param = lpss->dma_param;
dma->rxconf.src_maxburst = lpss->dma_maxburst;
*tx_param = lpss->dma_param;
dma->txconf.dst_maxburst = lpss->dma_maxburst;
dma->fn = lpss8250_dma_filter;
dma->rx_param = rx_param;
dma->tx_param = tx_param;
port->dma = dma;
return 0;
}
static int lpss8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct uart_8250_port uart;
struct lpss8250 *lpss;
int ret;
ret = pcim_enable_device(pdev);
if (ret)
return ret;
pci_set_master(pdev);
lpss = devm_kzalloc(&pdev->dev, sizeof(*lpss), GFP_KERNEL);
if (!lpss)
return -ENOMEM;
lpss->board = (struct lpss8250_board *)id->driver_data;
memset(&uart, 0, sizeof(struct uart_8250_port));
uart.port.dev = &pdev->dev;
uart.port.irq = pdev->irq;
uart.port.private_data = lpss;
uart.port.type = PORT_16550A;
uart.port.iotype = UPIO_MEM;
uart.port.regshift = 2;
uart.port.uartclk = lpss->board->base_baud * 16;
uart.port.flags = UPF_SHARE_IRQ | UPF_FIXED_PORT | UPF_FIXED_TYPE;
uart.capabilities = UART_CAP_FIFO | UART_CAP_AFE;
uart.port.mapbase = pci_resource_start(pdev, 0);
uart.port.membase = pcim_iomap(pdev, 0, 0);
if (!uart.port.membase)
return -ENOMEM;
ret = lpss->board->setup(lpss, &uart.port);
if (ret)
return ret;
ret = lpss8250_dma_setup(lpss, &uart);
if (ret)
goto err_exit;
ret = serial8250_register_8250_port(&uart);
if (ret < 0)
goto err_exit;
lpss->line = ret;
pci_set_drvdata(pdev, lpss);
return 0;
err_exit:
if (lpss->board->exit)
lpss->board->exit(lpss);
return ret;
}
static void lpss8250_remove(struct pci_dev *pdev)
{
struct lpss8250 *lpss = pci_get_drvdata(pdev);
if (lpss->board->exit)
lpss->board->exit(lpss);
serial8250_unregister_port(lpss->line);
}
static const struct lpss8250_board byt_board = {
.freq = 100000000,
.base_baud = 2764800,
.setup = byt_serial_setup,
};
static const struct lpss8250_board qrk_board = {
.freq = 44236800,
.base_baud = 2764800,
.setup = qrk_serial_setup,
.exit = qrk_serial_exit,
};
#define LPSS_DEVICE(id, board) { PCI_VDEVICE(INTEL, id), (kernel_ulong_t)&board }
static const struct pci_device_id pci_ids[] = {
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_QRK_UARTx, qrk_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BYT_UART1, byt_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BYT_UART2, byt_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BSW_UART1, byt_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BSW_UART2, byt_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BDW_UART1, byt_board),
LPSS_DEVICE(PCI_DEVICE_ID_INTEL_BDW_UART2, byt_board),
{ },
};
MODULE_DEVICE_TABLE(pci, pci_ids);
static struct pci_driver lpss8250_pci_driver = {
.name = "8250_lpss",
.id_table = pci_ids,
.probe = lpss8250_probe,
.remove = lpss8250_remove,
};
module_pci_driver(lpss8250_pci_driver);
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Intel LPSS UART driver");


@ -99,27 +99,27 @@ static int dnv_handle_irq(struct uart_port *p)
struct uart_8250_port *up = up_to_u8250p(p);
unsigned int fisr = serial_port_in(p, INTEL_MID_UART_DNV_FISR);
u32 status;
int ret = IRQ_NONE;
int ret = 0;
int err;
if (fisr & BIT(2)) {
err = hsu_dma_get_status(&mid->dma_chip, 1, &status);
if (err > 0) {
serial8250_rx_dma_flush(up);
ret |= IRQ_HANDLED;
ret |= 1;
} else if (err == 0)
ret |= hsu_dma_do_irq(&mid->dma_chip, 1, status);
}
if (fisr & BIT(1)) {
err = hsu_dma_get_status(&mid->dma_chip, 0, &status);
if (err > 0)
ret |= IRQ_HANDLED;
ret |= 1;
else if (err == 0)
ret |= hsu_dma_do_irq(&mid->dma_chip, 0, status);
}
if (fisr & BIT(0))
ret |= serial8250_handle_irq(p, serial_port_in(p, UART_IIR));
return ret;
return IRQ_RETVAL(ret);
}
#define DNV_DMA_CHAN_OFFSET 0x80


@ -62,7 +62,7 @@ mtk8250_set_termios(struct uart_port *port, struct ktermios *termios,
*/
baud = uart_get_baud_rate(port, termios, old,
port->uartclk / 16 / 0xffff,
port->uartclk / 16);
port->uartclk);
if (baud <= 115200) {
serial_port_out(port, UART_MTK_HIGHS, 0x0);
@ -76,10 +76,6 @@ mtk8250_set_termios(struct uart_port *port, struct ktermios *termios,
quot = DIV_ROUND_UP(port->uartclk, 4 * baud);
} else {
serial_port_out(port, UART_MTK_HIGHS, 0x3);
/* Set to highest baudrate supported */
if (baud >= 1152000)
baud = 921600;
quot = DIV_ROUND_UP(port->uartclk, 256 * baud);
}


@ -195,6 +195,7 @@ static int of_platform_serial_probe(struct platform_device *ofdev)
switch (port_type) {
case PORT_8250 ... PORT_MAX_8250:
{
u32 tx_threshold;
struct uart_8250_port port8250;
memset(&port8250, 0, sizeof(port8250));
port8250.port = port;
@ -202,6 +203,12 @@ static int of_platform_serial_probe(struct platform_device *ofdev)
if (port.fifosize)
port8250.capabilities = UART_CAP_FIFO;
/* Check for TX FIFO threshold & set tx_loadsz */
if ((of_property_read_u32(ofdev->dev.of_node, "tx-threshold",
&tx_threshold) == 0) &&
(tx_threshold < port.fifosize))
port8250.tx_loadsz = port.fifosize - tx_threshold;
if (of_property_read_bool(ofdev->dev.of_node,
"auto-flow-control"))
port8250.capabilities |= UART_CAP_AFE;
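
For illustration, with the values added to the 10m50 devkit DTS above (fifo-size = <32>, tx-threshold = <16>), tx_loadsz becomes 32 - 16 = 16: each TX-threshold interrupt refills the FIFO with at most 16 bytes once it has drained down to 16.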


@ -21,14 +21,10 @@
#include <linux/serial_core.h>
#include <linux/8250_pci.h>
#include <linux/bitops.h>
#include <linux/rational.h>
#include <asm/byteorder.h>
#include <asm/io.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
#include "8250.h"
/*
@ -1349,160 +1345,6 @@ ce4100_serial_setup(struct serial_private *priv,
return ret;
}
#define PCI_DEVICE_ID_INTEL_BYT_UART1 0x0f0a
#define PCI_DEVICE_ID_INTEL_BYT_UART2 0x0f0c
#define PCI_DEVICE_ID_INTEL_BSW_UART1 0x228a
#define PCI_DEVICE_ID_INTEL_BSW_UART2 0x228c
#define PCI_DEVICE_ID_INTEL_BDW_UART1 0x9ce3
#define PCI_DEVICE_ID_INTEL_BDW_UART2 0x9ce4
#define BYT_PRV_CLK 0x800
#define BYT_PRV_CLK_EN (1 << 0)
#define BYT_PRV_CLK_M_VAL_SHIFT 1
#define BYT_PRV_CLK_N_VAL_SHIFT 16
#define BYT_PRV_CLK_UPDATE (1 << 31)
#define BYT_TX_OVF_INT 0x820
#define BYT_TX_OVF_INT_MASK (1 << 1)
static void
byt_set_termios(struct uart_port *p, struct ktermios *termios,
struct ktermios *old)
{
unsigned int baud = tty_termios_baud_rate(termios);
unsigned long fref = 100000000, fuart = baud * 16;
unsigned long w = BIT(15) - 1;
unsigned long m, n;
u32 reg;
/* Gracefully handle the B0 case: fall back to B9600 */
fuart = fuart ? fuart : 9600 * 16;
/* Get Fuart closer to Fref */
fuart *= rounddown_pow_of_two(fref / fuart);
/*
* For baud rates 0.5M, 1M, 1.5M, 2M, 2.5M, 3M, 3.5M and 4M the
* dividers must be adjusted.
*
* uartclk = (m / n) * 100 MHz, where m <= n
*/
rational_best_approximation(fuart, fref, w, w, &m, &n);
p->uartclk = fuart;
/* Reset the clock */
reg = (m << BYT_PRV_CLK_M_VAL_SHIFT) | (n << BYT_PRV_CLK_N_VAL_SHIFT);
writel(reg, p->membase + BYT_PRV_CLK);
reg |= BYT_PRV_CLK_EN | BYT_PRV_CLK_UPDATE;
writel(reg, p->membase + BYT_PRV_CLK);
p->status &= ~UPSTAT_AUTOCTS;
if (termios->c_cflag & CRTSCTS)
p->status |= UPSTAT_AUTOCTS;
serial8250_do_set_termios(p, termios, old);
}
static bool byt_dma_filter(struct dma_chan *chan, void *param)
{
struct dw_dma_slave *dws = param;
if (dws->dma_dev != chan->device->dev)
return false;
chan->private = dws;
return true;
}
static unsigned int
byt_get_mctrl(struct uart_port *port)
{
unsigned int ret = serial8250_do_get_mctrl(port);
/* Force DCD and DSR signals to permanently be reported as active. */
ret |= TIOCM_CAR | TIOCM_DSR;
return ret;
}
static int
byt_serial_setup(struct serial_private *priv,
const struct pciserial_board *board,
struct uart_8250_port *port, int idx)
{
struct pci_dev *pdev = priv->dev;
struct device *dev = port->port.dev;
struct uart_8250_dma *dma;
struct dw_dma_slave *tx_param, *rx_param;
struct pci_dev *dma_dev;
int ret;
dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL);
if (!dma)
return -ENOMEM;
tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL);
if (!tx_param)
return -ENOMEM;
rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
if (!rx_param)
return -ENOMEM;
switch (pdev->device) {
case PCI_DEVICE_ID_INTEL_BYT_UART1:
case PCI_DEVICE_ID_INTEL_BSW_UART1:
case PCI_DEVICE_ID_INTEL_BDW_UART1:
rx_param->src_id = 3;
tx_param->dst_id = 2;
break;
case PCI_DEVICE_ID_INTEL_BYT_UART2:
case PCI_DEVICE_ID_INTEL_BSW_UART2:
case PCI_DEVICE_ID_INTEL_BDW_UART2:
rx_param->src_id = 5;
tx_param->dst_id = 4;
break;
default:
return -EINVAL;
}
rx_param->m_master = 0;
rx_param->p_master = 1;
dma->rxconf.src_maxburst = 16;
tx_param->m_master = 0;
tx_param->p_master = 1;
dma->txconf.dst_maxburst = 16;
dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
rx_param->dma_dev = &dma_dev->dev;
tx_param->dma_dev = &dma_dev->dev;
dma->fn = byt_dma_filter;
dma->rx_param = rx_param;
dma->tx_param = tx_param;
ret = pci_default_setup(priv, board, port, idx);
port->port.iotype = UPIO_MEM;
port->port.type = PORT_16550A;
port->port.flags = (port->port.flags | UPF_FIXED_PORT | UPF_FIXED_TYPE);
port->port.set_termios = byt_set_termios;
port->port.get_mctrl = byt_get_mctrl;
port->port.fifosize = 64;
port->tx_loadsz = 64;
port->dma = dma;
port->capabilities = UART_CAP_FIFO | UART_CAP_AFE;
/* Disable Tx counter interrupts */
writel(BYT_TX_OVF_INT_MASK, port->port.membase + BYT_TX_OVF_INT);
return ret;
}
static int
pci_omegapci_setup(struct serial_private *priv,
const struct pciserial_board *board,
@ -1741,6 +1583,19 @@ static int pci_eg20t_init(struct pci_dev *dev)
#define PCI_DEVICE_ID_EXAR_XR17V4358 0x4358
#define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358
#define UART_EXAR_MPIOINT_7_0 0x8f /* MPIOINT[7:0] */
#define UART_EXAR_MPIOLVL_7_0 0x90 /* MPIOLVL[7:0] */
#define UART_EXAR_MPIO3T_7_0 0x91 /* MPIO3T[7:0] */
#define UART_EXAR_MPIOINV_7_0 0x92 /* MPIOINV[7:0] */
#define UART_EXAR_MPIOSEL_7_0 0x93 /* MPIOSEL[7:0] */
#define UART_EXAR_MPIOOD_7_0 0x94 /* MPIOOD[7:0] */
#define UART_EXAR_MPIOINT_15_8 0x95 /* MPIOINT[15:8] */
#define UART_EXAR_MPIOLVL_15_8 0x96 /* MPIOLVL[15:8] */
#define UART_EXAR_MPIO3T_15_8 0x97 /* MPIO3T[15:8] */
#define UART_EXAR_MPIOINV_15_8 0x98 /* MPIOINV[15:8] */
#define UART_EXAR_MPIOSEL_15_8 0x99 /* MPIOSEL[15:8] */
#define UART_EXAR_MPIOOD_15_8 0x9a /* MPIOOD[15:8] */
static int
pci_xr17c154_setup(struct serial_private *priv,
const struct pciserial_board *board,
@ -1783,18 +1638,18 @@ pci_xr17v35x_setup(struct serial_private *priv,
* Setup Multipurpose Input/Output pins.
*/
if (idx == 0) {
writeb(0x00, p + 0x8f); /*MPIOINT[7:0]*/
writeb(0x00, p + 0x90); /*MPIOLVL[7:0]*/
writeb(0x00, p + 0x91); /*MPIO3T[7:0]*/
writeb(0x00, p + 0x92); /*MPIOINV[7:0]*/
writeb(0x00, p + 0x93); /*MPIOSEL[7:0]*/
writeb(0x00, p + 0x94); /*MPIOOD[7:0]*/
writeb(0x00, p + 0x95); /*MPIOINT[15:8]*/
writeb(0x00, p + 0x96); /*MPIOLVL[15:8]*/
writeb(0x00, p + 0x97); /*MPIO3T[15:8]*/
writeb(0x00, p + 0x98); /*MPIOINV[15:8]*/
writeb(0x00, p + 0x99); /*MPIOSEL[15:8]*/
writeb(0x00, p + 0x9a); /*MPIOOD[15:8]*/
writeb(0x00, p + UART_EXAR_MPIOINT_7_0);
writeb(0x00, p + UART_EXAR_MPIOLVL_7_0);
writeb(0x00, p + UART_EXAR_MPIO3T_7_0);
writeb(0x00, p + UART_EXAR_MPIOINV_7_0);
writeb(0x00, p + UART_EXAR_MPIOSEL_7_0);
writeb(0x00, p + UART_EXAR_MPIOOD_7_0);
writeb(0x00, p + UART_EXAR_MPIOINT_15_8);
writeb(0x00, p + UART_EXAR_MPIOLVL_15_8);
writeb(0x00, p + UART_EXAR_MPIO3T_15_8);
writeb(0x00, p + UART_EXAR_MPIOINV_15_8);
writeb(0x00, p + UART_EXAR_MPIOSEL_15_8);
writeb(0x00, p + UART_EXAR_MPIOOD_15_8);
}
writeb(0x00, p + UART_EXAR_8XMODE);
writeb(UART_FCTR_EXAR_TRGD, p + UART_EXAR_FCTR);
@ -1830,20 +1685,20 @@ pci_fastcom335_setup(struct serial_private *priv,
switch (priv->dev->device) {
case PCI_DEVICE_ID_COMMTECH_4222PCI335:
case PCI_DEVICE_ID_COMMTECH_4224PCI335:
writeb(0x78, p + 0x90); /* MPIOLVL[7:0] */
writeb(0x00, p + 0x92); /* MPIOINV[7:0] */
writeb(0x00, p + 0x93); /* MPIOSEL[7:0] */
writeb(0x78, p + UART_EXAR_MPIOLVL_7_0);
writeb(0x00, p + UART_EXAR_MPIOINV_7_0);
writeb(0x00, p + UART_EXAR_MPIOSEL_7_0);
break;
case PCI_DEVICE_ID_COMMTECH_2324PCI335:
case PCI_DEVICE_ID_COMMTECH_2328PCI335:
writeb(0x00, p + 0x90); /* MPIOLVL[7:0] */
writeb(0xc0, p + 0x92); /* MPIOINV[7:0] */
writeb(0xc0, p + 0x93); /* MPIOSEL[7:0] */
writeb(0x00, p + UART_EXAR_MPIOLVL_7_0);
writeb(0xc0, p + UART_EXAR_MPIOINV_7_0);
writeb(0xc0, p + UART_EXAR_MPIOSEL_7_0);
break;
}
writeb(0x00, p + 0x8f); /* MPIOINT[7:0] */
writeb(0x00, p + 0x91); /* MPIO3T[7:0] */
writeb(0x00, p + 0x94); /* MPIOOD[7:0] */
writeb(0x00, p + UART_EXAR_MPIOINT_7_0);
writeb(0x00, p + UART_EXAR_MPIO3T_7_0);
writeb(0x00, p + UART_EXAR_MPIOOD_7_0);
}
writeb(0x00, p + UART_EXAR_8XMODE);
writeb(UART_FCTR_EXAR_TRGD, p + UART_EXAR_FCTR);
@ -1934,7 +1789,6 @@ pci_wch_ch38x_setup(struct serial_private *priv,
#define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0022
#define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
#define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e
#define PCI_DEVICE_ID_INTEL_QRK_UART 0x0936
#define PCI_VENDOR_ID_SUNIX 0x1fd4
#define PCI_DEVICE_ID_SUNIX_1999 0x1999
@ -2078,48 +1932,6 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
.subdevice = PCI_ANY_ID,
.setup = kt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BYT_UART1,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BYT_UART2,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_UART1,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_UART2,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BDW_UART1,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BDW_UART2,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = byt_serial_setup,
},
/*
* ITE
*/
@ -2992,8 +2804,6 @@ enum pci_board_num_t {
pbn_ADDIDATA_PCIe_4_3906250,
pbn_ADDIDATA_PCIe_8_3906250,
pbn_ce4100_1_115200,
pbn_byt,
pbn_qrk,
pbn_omegapci,
pbn_NETMOS9900_2s_115200,
pbn_brcm_trumanage,
@ -3769,18 +3579,6 @@ static struct pciserial_board pci_boards[] = {
.base_baud = 921600,
.reg_shift = 2,
},
[pbn_byt] = {
.flags = FL_BASE0,
.num_ports = 1,
.base_baud = 2764800,
.reg_shift = 2,
},
[pbn_qrk] = {
.flags = FL_BASE0,
.num_ports = 1,
.base_baud = 2764800,
.reg_shift = 2,
},
[pbn_omegapci] = {
.flags = FL_BASE0,
.num_ports = 8,
@ -3892,6 +3690,15 @@ static const struct pci_device_id blacklist[] = {
{ PCI_VDEVICE(INTEL, 0x081d), },
{ PCI_VDEVICE(INTEL, 0x1191), },
{ PCI_VDEVICE(INTEL, 0x19d8), },
/* Intel platforms with DesignWare UART */
{ PCI_VDEVICE(INTEL, 0x0936), },
{ PCI_VDEVICE(INTEL, 0x0f0a), },
{ PCI_VDEVICE(INTEL, 0x0f0c), },
{ PCI_VDEVICE(INTEL, 0x228a), },
{ PCI_VDEVICE(INTEL, 0x228c), },
{ PCI_VDEVICE(INTEL, 0x9ce3), },
{ PCI_VDEVICE(INTEL, 0x9ce4), },
};
/*
@ -5659,40 +5466,7 @@ static struct pci_device_id serial_pci_tbl[] = {
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CE4100_UART,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
pbn_ce4100_1_115200 },
/* Intel BayTrail */
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT_UART1,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT_UART2,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW_UART1,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW_UART2,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
/* Intel Broadwell */
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BDW_UART1,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BDW_UART2,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000,
pbn_byt },
/*
* Intel Quark x1000
*/
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_QRK_UART,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
pbn_qrk },
/*
* Cronyx Omega PCI
*/


@ -178,7 +178,7 @@ static const struct serial8250_config uart_config[] = {
.fifo_size = 16,
.tx_loadsz = 16,
.fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
.flags = UART_CAP_FIFO | UART_CAP_AFE,
.flags = UART_CAP_FIFO /* | UART_CAP_AFE */,
},
[PORT_U6_16550A] = {
.name = "U6_16550A",
@ -585,11 +585,11 @@ EXPORT_SYMBOL_GPL(serial8250_rpm_put);
*/
int serial8250_em485_init(struct uart_8250_port *p)
{
if (p->em485 != NULL)
if (p->em485)
return 0;
p->em485 = kmalloc(sizeof(struct uart_8250_em485), GFP_ATOMIC);
if (p->em485 == NULL)
if (!p->em485)
return -ENOMEM;
setup_timer(&p->em485->stop_tx_timer,
@ -619,7 +619,7 @@ EXPORT_SYMBOL_GPL(serial8250_em485_init);
*/
void serial8250_em485_destroy(struct uart_8250_port *p)
{
if (p->em485 == NULL)
if (!p->em485)
return;
del_timer(&p->em485->start_tx_timer);
@ -1402,10 +1402,8 @@ static void serial8250_stop_rx(struct uart_port *port)
static void __do_stop_tx_rs485(struct uart_8250_port *p)
{
if (!p->em485)
return;
serial8250_em485_rts_after_send(p);
/*
* Empty the RX FIFO, we are not interested in anything
* received during the half-duplex transmission.
@ -1414,12 +1412,8 @@ static void __do_stop_tx_rs485(struct uart_8250_port *p)
if (!(p->port.rs485.flags & SER_RS485_RX_DURING_TX)) {
serial8250_clear_fifos(p);
serial8250_rpm_get(p);
p->ier |= UART_IER_RLSI | UART_IER_RDI;
serial_port_out(&p->port, UART_IER, p->ier);
serial8250_rpm_put(p);
}
}
@ -1429,6 +1423,7 @@ static void serial8250_em485_handle_stop_tx(unsigned long arg)
struct uart_8250_em485 *em485 = p->em485;
unsigned long flags;
serial8250_rpm_get(p);
spin_lock_irqsave(&p->port.lock, flags);
if (em485 &&
em485->active_timer == &em485->stop_tx_timer) {
@ -1436,15 +1431,13 @@ static void serial8250_em485_handle_stop_tx(unsigned long arg)
em485->active_timer = NULL;
}
spin_unlock_irqrestore(&p->port.lock, flags);
serial8250_rpm_put(p);
}
static void __stop_tx_rs485(struct uart_8250_port *p)
{
struct uart_8250_em485 *em485 = p->em485;
if (!em485)
return;
/*
* __do_stop_tx_rs485 is going to set RTS according to config
* AND flush RX FIFO if required.
@ -1475,7 +1468,7 @@ static inline void __stop_tx(struct uart_8250_port *p)
unsigned char lsr = serial_in(p, UART_LSR);
/*
* To provide the required timing and allow FIFO transfer,
* __stop_tx_rs485 must be called only when both FIFO and
* __stop_tx_rs485() must be called only when both FIFO and
* shift register are empty. It is for device driver to enable
* interrupt on TEMT.
*/
@ -1484,9 +1477,10 @@ static inline void __stop_tx(struct uart_8250_port *p)
del_timer(&em485->start_tx_timer);
em485->active_timer = NULL;
__stop_tx_rs485(p);
}
__do_stop_tx(p);
__stop_tx_rs485(p);
}
static void serial8250_stop_tx(struct uart_port *port)
@ -1876,6 +1870,30 @@ static int exar_handle_irq(struct uart_port *port)
return ret;
}
/*
* Newer 16550 compatible parts such as the SC16C650 & Altera 16550 Soft IP
* have a programmable TX threshold that triggers the THRE interrupt in
* the IIR register. In this case, the THRE interrupt indicates the FIFO
* has space available. Load it up with tx_loadsz bytes.
*/
static int serial8250_tx_threshold_handle_irq(struct uart_port *port)
{
unsigned long flags;
unsigned int iir = serial_port_in(port, UART_IIR);
/* TX Threshold IRQ triggered so load up FIFO */
if ((iir & UART_IIR_ID) == UART_IIR_THRI) {
struct uart_8250_port *up = up_to_u8250p(port);
spin_lock_irqsave(&port->lock, flags);
serial8250_tx_chars(up);
spin_unlock_irqrestore(&port->lock, flags);
}
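/*
 * Re-read IIR and hand any remaining interrupt sources (RX data,
 * line status, modem status) to the generic 8250 IRQ handler.
 */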
iir = serial_port_in(port, UART_IIR);
return serial8250_handle_irq(port, iir);
}
static unsigned int serial8250_tx_empty(struct uart_port *port)
{
struct uart_8250_port *up = up_to_u8250p(port);
@ -1988,6 +2006,7 @@ static void wait_for_xmitr(struct uart_8250_port *up, int bits)
if (--tmout == 0)
break;
udelay(1);
touch_nmi_watchdog();
}
/* Wait up to 1s for flow control if necessary */
@ -2164,6 +2183,25 @@ int serial8250_do_startup(struct uart_port *port)
serial_port_out(port, UART_LCR, 0);
}
/*
* For the Altera 16550 variants, set TX threshold trigger level.
*/
if (((port->type == PORT_ALTR_16550_F32) ||
(port->type == PORT_ALTR_16550_F64) ||
(port->type == PORT_ALTR_16550_F128)) && (port->fifosize > 1)) {
/* Bounds checking of TX threshold (valid 0 to fifosize-2) */
if ((up->tx_loadsz < 2) || (up->tx_loadsz > port->fifosize)) {
pr_err("ttyS%d TX FIFO Threshold errors, skipping\n",
serial_index(port));
} else {
serial_port_out(port, UART_ALTR_AFR,
UART_ALTR_EN_TXFIFO_LW);
serial_port_out(port, UART_ALTR_TX_LOW,
port->fifosize - up->tx_loadsz);
port->handle_irq = serial8250_tx_threshold_handle_irq;
}
}
if (port->irq) {
unsigned char iir1;
/*
@ -2499,8 +2537,6 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
struct ktermios *termios,
struct ktermios *old)
{
unsigned int tolerance = port->uartclk / 100;
/*
* Ask the core to calculate the divisor for us.
* Allow 1% tolerance at the upper limit so uart clks marginally
@ -2509,7 +2545,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
*/
return uart_get_baud_rate(port, termios, old,
port->uartclk / 16 / 0xffff,
(port->uartclk + tolerance) / 16);
port->uartclk);
}
void
@ -2546,12 +2582,9 @@ serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios,
/*
* MCR-based auto flow control. When AFE is enabled, RTS will be
* deasserted when the receive FIFO contains more characters than
* the trigger, or the MCR RTS bit is cleared. In the case where
* the remote UART is not using CTS auto flow control, we must
* have sufficient FIFO entries for the latency of the remote
* UART to respond. IOW, at least 32 bytes of FIFO.
* the trigger, or the MCR RTS bit is cleared.
*/
if (up->capabilities & UART_CAP_AFE && port->fifosize >= 32) {
if (up->capabilities & UART_CAP_AFE) {
up->mcr &= ~UART_MCR_AFE;
if (termios->c_cflag & CRTSCTS)
up->mcr |= UART_MCR_AFE;


@ -120,7 +120,6 @@ config SERIAL_8250_PCI
tristate "8250/16550 PCI device support" if EXPERT
depends on SERIAL_8250 && PCI
default SERIAL_8250
select RATIONAL
help
This builds standard PCI serial support. You may be able to
disable this feature if you only need legacy serial support.
@ -402,6 +401,21 @@ config SERIAL_8250_INGENIC
If you have a system using an Ingenic SoC and wish to make use of
its UARTs, say Y to this option. If unsure, say N.
config SERIAL_8250_LPSS
tristate "Support for serial ports on Intel LPSS platforms" if EXPERT
default SERIAL_8250
depends on SERIAL_8250 && PCI
depends on X86 || COMPILE_TEST
select DW_DMAC_CORE if SERIAL_8250_DMA
select DW_DMAC_PCI if (SERIAL_8250_DMA && X86_INTEL_LPSS)
select RATIONAL
help
Selecting this option will enable handling of the extra features
present on the UART found on various Intel platforms such as:
- Intel Baytrail SoC
- Intel Braswell SoC
- Intel Quark X1000 SoC
config SERIAL_8250_MID
tristate "Support for serial ports on Intel MID platforms" if EXPERT
default SERIAL_8250


@ -28,6 +28,7 @@ obj-$(CONFIG_SERIAL_8250_LPC18XX) += 8250_lpc18xx.o
obj-$(CONFIG_SERIAL_8250_MT6577) += 8250_mtk.o
obj-$(CONFIG_SERIAL_8250_UNIPHIER) += 8250_uniphier.o
obj-$(CONFIG_SERIAL_8250_INGENIC) += 8250_ingenic.o
obj-$(CONFIG_SERIAL_8250_LPSS) += 8250_lpss.o
obj-$(CONFIG_SERIAL_8250_MID) += 8250_mid.o
obj-$(CONFIG_SERIAL_8250_MOXA) += 8250_moxa.o
obj-$(CONFIG_SERIAL_OF_PLATFORM) += 8250_of.o


@ -280,7 +280,7 @@ static int altera_jtaguart_verify_port(struct uart_port *port,
/*
* Define the basic serial functions we support.
*/
static struct uart_ops altera_jtaguart_ops = {
static const struct uart_ops altera_jtaguart_ops = {
.tx_empty = altera_jtaguart_tx_empty,
.get_mctrl = altera_jtaguart_get_mctrl,
.set_mctrl = altera_jtaguart_set_mctrl,


@ -404,7 +404,7 @@ static void altera_uart_poll_put_char(struct uart_port *port, unsigned char c)
/*
* Define the basic serial functions we support.
*/
static struct uart_ops altera_uart_ops = {
static const struct uart_ops altera_uart_ops = {
.tx_empty = altera_uart_tx_empty,
.get_mctrl = altera_uart_get_mctrl,
.set_mctrl = altera_uart_set_mctrl,


@ -93,6 +93,10 @@ static u16 pl011_std_offsets[REG_ARRAY_SIZE] = {
struct vendor_data {
const u16 *reg_offset;
unsigned int ifls;
unsigned int fr_busy;
unsigned int fr_dsr;
unsigned int fr_cts;
unsigned int fr_ri;
bool access_32b;
bool oversampling;
bool dma_threshold;
@ -111,6 +115,10 @@ static unsigned int get_fifosize_arm(struct amba_device *dev)
static struct vendor_data vendor_arm = {
.reg_offset = pl011_std_offsets,
.ifls = UART011_IFLS_RX4_8|UART011_IFLS_TX4_8,
.fr_busy = UART01x_FR_BUSY,
.fr_dsr = UART01x_FR_DSR,
.fr_cts = UART01x_FR_CTS,
.fr_ri = UART011_FR_RI,
.oversampling = false,
.dma_threshold = false,
.cts_event_workaround = false,
@ -121,6 +129,10 @@ static struct vendor_data vendor_arm = {
static struct vendor_data vendor_sbsa = {
.reg_offset = pl011_std_offsets,
.fr_busy = UART01x_FR_BUSY,
.fr_dsr = UART01x_FR_DSR,
.fr_cts = UART01x_FR_CTS,
.fr_ri = UART011_FR_RI,
.access_32b = true,
.oversampling = false,
.dma_threshold = false,
@ -164,6 +176,10 @@ static unsigned int get_fifosize_st(struct amba_device *dev)
static struct vendor_data vendor_st = {
.reg_offset = pl011_st_offsets,
.ifls = UART011_IFLS_RX_HALF|UART011_IFLS_TX_HALF,
.fr_busy = UART01x_FR_BUSY,
.fr_dsr = UART01x_FR_DSR,
.fr_cts = UART01x_FR_CTS,
.fr_ri = UART011_FR_RI,
.oversampling = true,
.dma_threshold = true,
.cts_event_workaround = true,
@ -188,11 +204,20 @@ static const u16 pl011_zte_offsets[REG_ARRAY_SIZE] = {
[REG_DMACR] = ZX_UART011_DMACR,
};
static struct vendor_data vendor_zte __maybe_unused = {
static unsigned int get_fifosize_zte(struct amba_device *dev)
{
return 16;
}
static struct vendor_data vendor_zte = {
.reg_offset = pl011_zte_offsets,
.access_32b = true,
.ifls = UART011_IFLS_RX4_8|UART011_IFLS_TX4_8,
.get_fifosize = get_fifosize_arm,
.fr_busy = ZX_UART01x_FR_BUSY,
.fr_dsr = ZX_UART01x_FR_DSR,
.fr_cts = ZX_UART01x_FR_CTS,
.fr_ri = ZX_UART011_FR_RI,
.get_fifosize = get_fifosize_zte,
};
/* Deals with DMA transactions */
@ -1167,7 +1192,7 @@ static void pl011_dma_shutdown(struct uart_amba_port *uap)
return;
/* Disable RX and TX DMA */
while (pl011_read(uap, REG_FR) & UART01x_FR_BUSY)
while (pl011_read(uap, REG_FR) & uap->vendor->fr_busy)
cpu_relax();
spin_lock_irq(&uap->port.lock);
@ -1416,11 +1441,12 @@ static void pl011_modem_status(struct uart_amba_port *uap)
if (delta & UART01x_FR_DCD)
uart_handle_dcd_change(&uap->port, status & UART01x_FR_DCD);
if (delta & UART01x_FR_DSR)
if (delta & uap->vendor->fr_dsr)
uap->port.icount.dsr++;
if (delta & UART01x_FR_CTS)
uart_handle_cts_change(&uap->port, status & UART01x_FR_CTS);
if (delta & uap->vendor->fr_cts)
uart_handle_cts_change(&uap->port,
status & uap->vendor->fr_cts);
wake_up_interruptible(&uap->port.state->port.delta_msr_wait);
}
@ -1493,7 +1519,8 @@ static unsigned int pl011_tx_empty(struct uart_port *port)
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int status = pl011_read(uap, REG_FR);
return status & (UART01x_FR_BUSY|UART01x_FR_TXFF) ? 0 : TIOCSER_TEMT;
return status & (uap->vendor->fr_busy | UART01x_FR_TXFF) ?
0 : TIOCSER_TEMT;
}
static unsigned int pl011_get_mctrl(struct uart_port *port)
@ -1508,9 +1535,9 @@ static unsigned int pl011_get_mctrl(struct uart_port *port)
result |= tiocmbit
TIOCMBIT(UART01x_FR_DCD, TIOCM_CAR);
TIOCMBIT(UART01x_FR_DSR, TIOCM_DSR);
TIOCMBIT(UART01x_FR_CTS, TIOCM_CTS);
TIOCMBIT(UART011_FR_RI, TIOCM_RNG);
TIOCMBIT(uap->vendor->fr_dsr, TIOCM_DSR);
TIOCMBIT(uap->vendor->fr_cts, TIOCM_CTS);
TIOCMBIT(uap->vendor->fr_ri, TIOCM_RNG);
#undef TIOCMBIT
return result;
}
@ -2191,7 +2218,7 @@ pl011_console_write(struct console *co, const char *s, unsigned int count)
* Finally, wait for transmitter to become empty
* and restore the TCR
*/
while (pl011_read(uap, REG_FR) & UART01x_FR_BUSY)
while (pl011_read(uap, REG_FR) & uap->vendor->fr_busy)
cpu_relax();
if (!uap->vendor->always_enabled)
pl011_write(old_cr, uap, REG_CR);
@ -2555,7 +2582,8 @@ static int sbsa_uart_probe(struct platform_device *pdev)
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
dev_err(&pdev->dev, "cannot obtain irq\n");
if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev, "cannot obtain irq\n");
return ret;
}
uap->port.irq = ret;
@ -2622,6 +2650,11 @@ static struct amba_id pl011_ids[] = {
.mask = 0x00ffffff,
.data = &vendor_st,
},
{
.id = AMBA_LINUX_ID(0x00, 0x1, 0xffe),
.mask = 0x00ffffff,
.data = &vendor_zte,
},
{ 0, 0 },
};

View file

@ -464,7 +464,7 @@ static int arc_serial_poll_getchar(struct uart_port *port)
}
#endif
static struct uart_ops arc_serial_pops = {
static const struct uart_ops arc_serial_pops = {
.tx_empty = arc_serial_tx_empty,
.set_mctrl = arc_serial_set_mctrl,
.get_mctrl = arc_serial_get_mctrl,

View file

@ -166,6 +166,7 @@ struct atmel_uart_port {
u32 rts_low;
bool ms_irq_enabled;
u32 rtor; /* address of receiver timeout register if it exists */
bool has_frac_baudrate;
bool has_hw_timer;
struct timer_list uart_timer;
@ -1634,8 +1635,8 @@ static void atmel_init_property(struct atmel_uart_port *atmel_port,
if (np) {
/* DMA/PDC usage specification */
if (of_get_property(np, "atmel,use-dma-rx", NULL)) {
if (of_get_property(np, "dmas", NULL)) {
if (of_property_read_bool(np, "atmel,use-dma-rx")) {
if (of_property_read_bool(np, "dmas")) {
atmel_port->use_dma_rx = true;
atmel_port->use_pdc_rx = false;
} else {
@ -1647,8 +1648,8 @@ static void atmel_init_property(struct atmel_uart_port *atmel_port,
atmel_port->use_pdc_rx = false;
}
if (of_get_property(np, "atmel,use-dma-tx", NULL)) {
if (of_get_property(np, "dmas", NULL)) {
if (of_property_read_bool(np, "atmel,use-dma-tx")) {
if (of_property_read_bool(np, "dmas")) {
atmel_port->use_dma_tx = true;
atmel_port->use_pdc_tx = false;
} else {
@ -1745,6 +1746,11 @@ static void atmel_get_ip_name(struct uart_port *port)
dbgu_uart = 0x44424755; /* DBGU */
new_uart = 0x55415254; /* UART */
/*
* Only USART devices from at91sam9260 SOC implement fractional
* baudrate.
*/
atmel_port->has_frac_baudrate = false;
atmel_port->has_hw_timer = false;
if (name == new_uart) {
@ -1753,6 +1759,7 @@ static void atmel_get_ip_name(struct uart_port *port)
atmel_port->rtor = ATMEL_UA_RTOR;
} else if (name == usart) {
dev_dbg(port->dev, "Usart\n");
atmel_port->has_frac_baudrate = true;
atmel_port->has_hw_timer = true;
atmel_port->rtor = ATMEL_US_RTOR;
} else if (name == dbgu_uart) {
@ -1764,6 +1771,7 @@ static void atmel_get_ip_name(struct uart_port *port)
case 0x302:
case 0x10213:
dev_dbg(port->dev, "This version is usart\n");
atmel_port->has_frac_baudrate = true;
atmel_port->has_hw_timer = true;
atmel_port->rtor = ATMEL_US_RTOR;
break;
@ -1929,6 +1937,9 @@ static void atmel_shutdown(struct uart_port *port)
{
struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
/* Disable modem control lines interrupts */
atmel_disable_ms(port);
/* Disable interrupts at device level */
atmel_uart_writel(port, ATMEL_US_IDR, -1);
@ -1979,8 +1990,6 @@ static void atmel_shutdown(struct uart_port *port)
*/
free_irq(port->irq, port);
atmel_port->ms_irq_enabled = false;
atmel_flush_buffer(port);
}
@ -2025,8 +2034,9 @@ static void atmel_serial_pm(struct uart_port *port, unsigned int state,
static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
struct ktermios *old)
{
struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
unsigned long flags;
unsigned int old_mode, mode, imr, quot, baud;
unsigned int old_mode, mode, imr, quot, baud, div, cd, fp = 0;
/* save the current mode register */
mode = old_mode = atmel_uart_readl(port, ATMEL_US_MR);
@ -2036,12 +2046,6 @@ static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
ATMEL_US_PAR | ATMEL_US_USMODE);
baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk / 16);
quot = uart_get_divisor(port, baud);
if (quot > 65535) { /* BRGR is 16-bit, so switch to slower clock */
quot /= 8;
mode |= ATMEL_US_USCLKS_MCK_DIV8;
}
/* byte size */
switch (termios->c_cflag & CSIZE) {
@ -2160,7 +2164,31 @@ static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
atmel_uart_writel(port, ATMEL_US_CR, rts_state);
}
/* set the baud rate */
/*
* Set the baud rate:
* Fractional baudrate allows setting up the output frequency more
* accurately. This feature is enabled only when using normal mode.
* baudrate = selected clock / (8 * (2 - OVER) * (CD + FP / 8))
* Currently, OVER is always set to 0 so we get
* baudrate = selected clock / (16 * (CD + FP / 8))
* then
* 8 CD + FP = selected clock / (2 * baudrate)
*/
if (atmel_port->has_frac_baudrate &&
(mode & ATMEL_US_USMODE) == ATMEL_US_USMODE_NORMAL) {
div = DIV_ROUND_CLOSEST(port->uartclk, baud * 2);
cd = div >> 3;
fp = div & ATMEL_US_FP_MASK;
} else {
cd = uart_get_divisor(port, baud);
}
if (cd > 65535) { /* BRGR is 16-bit, so switch to slower clock */
cd /= 8;
mode |= ATMEL_US_USCLKS_MCK_DIV8;
}
quot = cd | fp << ATMEL_US_FP_OFFSET;
atmel_uart_writel(port, ATMEL_US_BRGR, quot);
atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_RSTSTA | ATMEL_US_RSTRX);
atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_TXEN | ATMEL_US_RXEN);
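
To make the fractional divisor maths in the comment above concrete, here is a small worked example; the 132 MHz selected clock and 115200 baud target are illustrative values, not taken from the patch:

/* Illustrative inputs: port->uartclk = 132000000, baud = 115200. */
div = DIV_ROUND_CLOSEST(132000000, 115200 * 2);	/* = 573 */
cd  = div >> 3;					/* = 71 (573 / 8) */
fp  = div & ATMEL_US_FP_MASK;			/* = 5  (573 % 8) */
/*
 * Resulting rate: 132000000 / (16 * (71 + 5/8)) ~= 115183 baud, about
 * 0.015% off target, versus roughly 0.5% error with the integer-only
 * divisor (CD = 72) the old code would have programmed.
 */
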
@ -2292,7 +2320,7 @@ static void atmel_poll_put_char(struct uart_port *port, unsigned char ch)
}
#endif
static struct uart_ops atmel_pops = {
static const struct uart_ops atmel_pops = {
.tx_empty = atmel_tx_empty,
.set_mctrl = atmel_set_mctrl,
.get_mctrl = atmel_get_mctrl,

View file

@ -631,7 +631,7 @@ static int bcm_uart_verify_port(struct uart_port *port,
}
/* serial core callbacks */
static struct uart_ops bcm_uart_ops = {
static const struct uart_ops bcm_uart_ops = {
.tx_empty = bcm_uart_tx_empty,
.get_mctrl = bcm_uart_get_mctrl,
.set_mctrl = bcm_uart_set_mctrl,

View file

@ -53,7 +53,8 @@ static void smh_write(struct console *con, const char *s, unsigned n)
uart_console_write(&dev->port, s, n, smh_putc);
}
int __init early_smh_setup(struct earlycon_device *device, const char *opt)
static int
__init early_smh_setup(struct earlycon_device *device, const char *opt)
{
device->con->write = smh_write;
return 0;

View file

@ -21,6 +21,7 @@
#include <linux/sizes.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/acpi.h>
#ifdef CONFIG_FIX_EARLYCON_MEM
#include <asm/fixmap.h>
@ -38,7 +39,7 @@ static struct earlycon_device early_console_dev = {
.con = &early_con,
};
static void __iomem * __init earlycon_map(unsigned long paddr, size_t size)
static void __iomem * __init earlycon_map(resource_size_t paddr, size_t size)
{
void __iomem *base;
#ifdef CONFIG_FIX_EARLYCON_MEM
@ -49,8 +50,7 @@ static void __iomem * __init earlycon_map(unsigned long paddr, size_t size)
base = ioremap(paddr, size);
#endif
if (!base)
pr_err("%s: Couldn't map 0x%llx\n", __func__,
(unsigned long long)paddr);
pr_err("%s: Couldn't map %pa\n", __func__, &paddr);
return base;
}
@ -92,7 +92,7 @@ static int __init parse_options(struct earlycon_device *device, char *options)
{
struct uart_port *port = &device->port;
int length;
unsigned long addr;
resource_size_t addr;
if (uart_parse_earlycon(options, &port->iotype, &addr, &options))
return -EINVAL;
@ -199,6 +199,14 @@ int __init setup_earlycon(char *buf)
return -ENOENT;
}
/*
* When CONFIG_ACPI_SPCR_TABLE is defined, "earlycon" without parameters on the
* command line does not start the DT earlycon immediately; instead it defers
* starting it until the DT/ACPI decision is made. At that point, call
* parse_spcr() if ACPI is enabled, else call early_init_dt_scan_chosen_stdout().
*/
bool earlycon_init_is_deferred __initdata;
/* early_param wrapper for setup_earlycon() */
static int __init param_setup_earlycon(char *buf)
{
@ -208,8 +216,14 @@ static int __init param_setup_earlycon(char *buf)
* Just 'earlycon' is a valid param for devicetree earlycons;
* don't generate a warning from parse_early_params() in that case
*/
if (!buf || !buf[0])
return 0;
if (!buf || !buf[0]) {
if (IS_ENABLED(CONFIG_ACPI_SPCR_TABLE)) {
earlycon_init_is_deferred = true;
return 0;
} else {
return early_init_dt_scan_chosen_stdout();
}
}
err = setup_earlycon(buf);
if (err == -ENOENT || err == -EALREADY)

File diff suppressed because it is too large

View file

@ -190,6 +190,7 @@
enum imx_uart_type {
IMX1_UART,
IMX21_UART,
IMX53_UART,
IMX6Q_UART,
};
@ -222,6 +223,9 @@ struct imx_port {
struct dma_chan *dma_chan_rx, *dma_chan_tx;
struct scatterlist rx_sgl, tx_sgl[2];
void *rx_buf;
struct circ_buf rx_ring;
unsigned int rx_periods;
dma_cookie_t rx_cookie;
unsigned int tx_bytes;
unsigned int dma_tx_nents;
wait_queue_head_t dma_wait;
@ -244,6 +248,10 @@ static struct imx_uart_data imx_uart_devdata[] = {
.uts_reg = IMX21_UTS,
.devtype = IMX21_UART,
},
[IMX53_UART] = {
.uts_reg = IMX21_UTS,
.devtype = IMX53_UART,
},
[IMX6Q_UART] = {
.uts_reg = IMX21_UTS,
.devtype = IMX6Q_UART,
@ -257,6 +265,9 @@ static const struct platform_device_id imx_uart_devtype[] = {
}, {
.name = "imx21-uart",
.driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX21_UART],
}, {
.name = "imx53-uart",
.driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX53_UART],
}, {
.name = "imx6q-uart",
.driver_data = (kernel_ulong_t) &imx_uart_devdata[IMX6Q_UART],
@ -268,6 +279,7 @@ MODULE_DEVICE_TABLE(platform, imx_uart_devtype);
static const struct of_device_id imx_uart_dt_ids[] = {
{ .compatible = "fsl,imx6q-uart", .data = &imx_uart_devdata[IMX6Q_UART], },
{ .compatible = "fsl,imx53-uart", .data = &imx_uart_devdata[IMX53_UART], },
{ .compatible = "fsl,imx1-uart", .data = &imx_uart_devdata[IMX1_UART], },
{ .compatible = "fsl,imx21-uart", .data = &imx_uart_devdata[IMX21_UART], },
{ /* sentinel */ }
@ -289,6 +301,11 @@ static inline int is_imx21_uart(struct imx_port *sport)
return sport->devdata->devtype == IMX21_UART;
}
static inline int is_imx53_uart(struct imx_port *sport)
{
return sport->devdata->devtype == IMX53_UART;
}
static inline int is_imx6q_uart(struct imx_port *sport)
{
return sport->devdata->devtype == IMX6Q_UART;
@ -701,6 +718,7 @@ static irqreturn_t imx_rxint(int irq, void *dev_id)
return IRQ_HANDLED;
}
static void clear_rx_errors(struct imx_port *sport);
static int start_rx_dma(struct imx_port *sport);
/*
* If the RXFIFO is filled with some data, and then we
@ -726,6 +744,11 @@ static void imx_dma_rxint(struct imx_port *sport)
temp &= ~(UCR2_ATEN);
writel(temp, sport->port.membase + UCR2);
/* disable the rx errors interrupts */
temp = readl(sport->port.membase + UCR4);
temp &= ~UCR4_OREN;
writel(temp, sport->port.membase + UCR4);
/* tell the DMA to receive the data. */
start_rx_dma(sport);
}
@ -740,12 +763,13 @@ static unsigned int imx_get_hwmctrl(struct imx_port *sport)
{
unsigned int tmp = TIOCM_DSR;
unsigned usr1 = readl(sport->port.membase + USR1);
unsigned usr2 = readl(sport->port.membase + USR2);
if (usr1 & USR1_RTSS)
tmp |= TIOCM_CTS;
/* in DCE mode DCDIN is always 0 */
if (!(usr1 & USR2_DCDIN))
if (!(usr2 & USR2_DCDIN))
tmp |= TIOCM_CAR;
if (sport->dte_mode)
@ -932,30 +956,6 @@ static void imx_timeout(unsigned long data)
}
#define RX_BUF_SIZE (PAGE_SIZE)
static void imx_rx_dma_done(struct imx_port *sport)
{
unsigned long temp;
unsigned long flags;
spin_lock_irqsave(&sport->port.lock, flags);
/* re-enable interrupts to get notified when new symbols are incoming */
temp = readl(sport->port.membase + UCR1);
temp |= UCR1_RRDYEN;
writel(temp, sport->port.membase + UCR1);
temp = readl(sport->port.membase + UCR2);
temp |= UCR2_ATEN;
writel(temp, sport->port.membase + UCR2);
sport->dma_is_rxing = 0;
/* Is the shutdown waiting for us? */
if (waitqueue_active(&sport->dma_wait))
wake_up(&sport->dma_wait);
spin_unlock_irqrestore(&sport->port.lock, flags);
}
/*
* There are two kinds of RX DMA interrupts(such as in the MX6Q):
@ -972,43 +972,76 @@ static void dma_rx_callback(void *data)
struct scatterlist *sgl = &sport->rx_sgl;
struct tty_port *port = &sport->port.state->port;
struct dma_tx_state state;
struct circ_buf *rx_ring = &sport->rx_ring;
enum dma_status status;
unsigned int count;
/* unmap it first */
dma_unmap_sg(sport->port.dev, sgl, 1, DMA_FROM_DEVICE);
unsigned int w_bytes = 0;
unsigned int r_bytes;
unsigned int bd_size;
status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
count = RX_BUF_SIZE - state.residue;
dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
if (count) {
if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
int bytes = tty_insert_flip_string(port, sport->rx_buf,
count);
if (bytes != count)
sport->port.icount.buf_overrun++;
}
tty_flip_buffer_push(port);
sport->port.icount.rx += count;
if (status == DMA_ERROR) {
dev_err(sport->port.dev, "DMA transaction error.\n");
clear_rx_errors(sport);
return;
}
/*
* Restart RX DMA directly if more data is available in order to skip
* the roundtrip through the IRQ handler. If there is some data already
* in the FIFO, DMA needs to be restarted soon anyways.
*
* Otherwise stop the DMA and reactivate FIFO IRQs to restart DMA once
* data starts to arrive again.
*/
if (readl(sport->port.membase + USR2) & USR2_RDR)
start_rx_dma(sport);
else
imx_rx_dma_done(sport);
if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
/*
* The state-residue variable represents the empty space
* relative to the entire buffer. Taking this into consideration
* the head is always calculated based on the buffer total
* length - DMA transaction residue. The UART script from the
* SDMA firmware will jump to the next buffer descriptor,
* once a DMA transaction is finalized (IMX53 RM - A.4.1.2.4).
* Taking this into consideration the tail is always at the
* beginning of the buffer descriptor that contains the head.
*/
/* Calculate the head */
rx_ring->head = sg_dma_len(sgl) - state.residue;
/* Calculate the tail. */
bd_size = sg_dma_len(sgl) / sport->rx_periods;
rx_ring->tail = ((rx_ring->head-1) / bd_size) * bd_size;
if (rx_ring->head <= sg_dma_len(sgl) &&
rx_ring->head > rx_ring->tail) {
/* Move data from tail to head */
r_bytes = rx_ring->head - rx_ring->tail;
/* CPU claims ownership of RX DMA buffer */
dma_sync_sg_for_cpu(sport->port.dev, sgl, 1,
DMA_FROM_DEVICE);
w_bytes = tty_insert_flip_string(port,
sport->rx_buf + rx_ring->tail, r_bytes);
/* UART retrieves ownership of RX DMA buffer */
dma_sync_sg_for_device(sport->port.dev, sgl, 1,
DMA_FROM_DEVICE);
if (w_bytes != r_bytes)
sport->port.icount.buf_overrun++;
sport->port.icount.rx += w_bytes;
} else {
WARN_ON(rx_ring->head > sg_dma_len(sgl));
WARN_ON(rx_ring->head <= rx_ring->tail);
}
}
if (w_bytes) {
tty_flip_buffer_push(port);
dev_dbg(sport->port.dev, "We get %d bytes.\n", w_bytes);
}
}
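
A worked example of the head/tail arithmetic above, assuming RX_BUF_SIZE is PAGE_SIZE (typically 4096) and rx_periods = 4 as set later in this patch; the residue value itself is made up for illustration:

/* Assumed DMA state: state.residue = 2500 out of a 4096-byte buffer. */
bd_size       = 4096 / 4;			/* 1024 bytes per buffer descriptor */
rx_ring->head = 4096 - 2500;			/* = 1596 */
rx_ring->tail = ((1596 - 1) / 1024) * 1024;	/* = 1024, start of the descriptor
						   holding the head */
r_bytes       = 1596 - 1024;			/* 572 bytes pushed to the tty layer */
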
/* RX DMA buffer periods */
#define RX_DMA_PERIODS 4
static int start_rx_dma(struct imx_port *sport)
{
struct scatterlist *sgl = &sport->rx_sgl;
@ -1017,14 +1050,21 @@ static int start_rx_dma(struct imx_port *sport)
struct dma_async_tx_descriptor *desc;
int ret;
sport->rx_ring.head = 0;
sport->rx_ring.tail = 0;
sport->rx_periods = RX_DMA_PERIODS;
sg_init_one(sgl, sport->rx_buf, RX_BUF_SIZE);
ret = dma_map_sg(dev, sgl, 1, DMA_FROM_DEVICE);
if (ret == 0) {
dev_err(dev, "DMA mapping error for RX.\n");
return -EINVAL;
}
desc = dmaengine_prep_slave_sg(chan, sgl, 1, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT);
desc = dmaengine_prep_dma_cyclic(chan, sg_dma_address(sgl),
sg_dma_len(sgl), sg_dma_len(sgl) / sport->rx_periods,
DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
if (!desc) {
dma_unmap_sg(dev, sgl, 1, DMA_FROM_DEVICE);
dev_err(dev, "We cannot prepare for the RX slave dma!\n");
@ -1034,11 +1074,36 @@ static int start_rx_dma(struct imx_port *sport)
desc->callback_param = sport;
dev_dbg(dev, "RX: prepare for the DMA.\n");
dmaengine_submit(desc);
sport->rx_cookie = dmaengine_submit(desc);
dma_async_issue_pending(chan);
return 0;
}
static void clear_rx_errors(struct imx_port *sport)
{
unsigned int status_usr1, status_usr2;
status_usr1 = readl(sport->port.membase + USR1);
status_usr2 = readl(sport->port.membase + USR2);
if (status_usr2 & USR2_BRCD) {
sport->port.icount.brk++;
writel(USR2_BRCD, sport->port.membase + USR2);
} else if (status_usr1 & USR1_FRAMERR) {
sport->port.icount.frame++;
writel(USR1_FRAMERR, sport->port.membase + USR1);
} else if (status_usr1 & USR1_PARITYERR) {
sport->port.icount.parity++;
writel(USR1_PARITYERR, sport->port.membase + USR1);
}
if (status_usr2 & USR2_ORE) {
sport->port.icount.overrun++;
writel(USR2_ORE, sport->port.membase + USR2);
}
}
#define TXTL_DEFAULT 2 /* reset default */
#define RXTL_DEFAULT 1 /* reset default */
#define TXTL_DMA 8 /* DMA burst setting */
@ -1058,14 +1123,16 @@ static void imx_setup_ufcr(struct imx_port *sport,
static void imx_uart_dma_exit(struct imx_port *sport)
{
if (sport->dma_chan_rx) {
dmaengine_terminate_sync(sport->dma_chan_rx);
dma_release_channel(sport->dma_chan_rx);
sport->dma_chan_rx = NULL;
sport->rx_cookie = -EINVAL;
kfree(sport->rx_buf);
sport->rx_buf = NULL;
}
if (sport->dma_chan_tx) {
dmaengine_terminate_sync(sport->dma_chan_tx);
dma_release_channel(sport->dma_chan_tx);
sport->dma_chan_tx = NULL;
}
@ -1103,6 +1170,7 @@ static int imx_uart_dma_init(struct imx_port *sport)
ret = -ENOMEM;
goto err;
}
sport->rx_ring.buf = sport->rx_buf;
/* Prepare for TX : */
sport->dma_chan_tx = dma_request_slave_channel(dev, "tx");
@ -1201,8 +1269,7 @@ static int imx_startup(struct uart_port *port)
writel(temp & ~UCR4_DREN, sport->port.membase + UCR4);
/* Can we enable the DMA support? */
if (is_imx6q_uart(sport) && !uart_console(port) &&
!sport->dma_is_inited)
if (!uart_console(port) && !sport->dma_is_inited)
imx_uart_dma_init(sport);
spin_lock_irqsave(&sport->port.lock, flags);
@ -1283,17 +1350,11 @@ static void imx_shutdown(struct uart_port *port)
unsigned long flags;
if (sport->dma_is_enabled) {
int ret;
sport->dma_is_rxing = 0;
sport->dma_is_txing = 0;
dmaengine_terminate_sync(sport->dma_chan_tx);
dmaengine_terminate_sync(sport->dma_chan_rx);
/* We have to wait for the DMA to finish. */
ret = wait_event_interruptible(sport->dma_wait,
!sport->dma_is_rxing && !sport->dma_is_txing);
if (ret != 0) {
sport->dma_is_rxing = 0;
sport->dma_is_txing = 0;
dmaengine_terminate_all(sport->dma_chan_tx);
dmaengine_terminate_all(sport->dma_chan_rx);
}
spin_lock_irqsave(&sport->port.lock, flags);
imx_stop_tx(port);
imx_stop_rx(port);
@ -1690,7 +1751,7 @@ static int imx_rs485_config(struct uart_port *port,
return 0;
}
static struct uart_ops imx_pops = {
static const struct uart_ops imx_pops = {
.tx_empty = imx_tx_empty,
.set_mctrl = imx_set_mctrl,
.get_mctrl = imx_get_mctrl,
@ -2077,8 +2138,10 @@ static int serial_imx_probe(struct platform_device *pdev)
/* For register access, we only need to enable the ipg clock. */
ret = clk_prepare_enable(sport->clk_ipg);
if (ret)
if (ret) {
dev_err(&pdev->dev, "failed to enable per clk: %d\n", ret);
return ret;
}
/* Disable interrupts before requesting them */
reg = readl_relaxed(sport->port.membase + UCR1);
@ -2095,18 +2158,26 @@ static int serial_imx_probe(struct platform_device *pdev)
if (txirq > 0) {
ret = devm_request_irq(&pdev->dev, rxirq, imx_rxint, 0,
dev_name(&pdev->dev), sport);
if (ret)
if (ret) {
dev_err(&pdev->dev, "failed to request rx irq: %d\n",
ret);
return ret;
}
ret = devm_request_irq(&pdev->dev, txirq, imx_txint, 0,
dev_name(&pdev->dev), sport);
if (ret)
if (ret) {
dev_err(&pdev->dev, "failed to request tx irq: %d\n",
ret);
return ret;
}
} else {
ret = devm_request_irq(&pdev->dev, rxirq, imx_int, 0,
dev_name(&pdev->dev), sport);
if (ret)
if (ret) {
dev_err(&pdev->dev, "failed to request irq: %d\n", ret);
return ret;
}
}
imx_ports[sport->port.line] = sport;

View file

@ -346,7 +346,7 @@ static void jsm_config_port(struct uart_port *port, int flags)
port->type = PORT_JSM;
}
static struct uart_ops jsm_ops = {
static const struct uart_ops jsm_ops = {
.tx_empty = jsm_tty_tx_empty,
.set_mctrl = jsm_tty_set_mctrl,
.get_mctrl = jsm_tty_get_mctrl,

View file

@ -712,7 +712,7 @@ static void max3100_break_ctl(struct uart_port *port, int break_state)
dev_dbg(&s->spi->dev, "%s\n", __func__);
}
static struct uart_ops max3100_ops = {
static const struct uart_ops max3100_ops = {
.tx_empty = max3100_tx_empty,
.set_mctrl = max3100_set_mctrl,
.get_mctrl = max3100_get_mctrl,

View file

@ -1329,9 +1329,9 @@ static int max310x_spi_probe(struct spi_device *spi)
const struct spi_device_id *id_entry = spi_get_device_id(spi);
devtype = (struct max310x_devtype *)id_entry->driver_data;
flags = IRQF_TRIGGER_FALLING;
}
flags = IRQF_TRIGGER_FALLING;
regcfg.max_register = devtype->nr * 0x20 - 1;
regmap = devm_regmap_init_spi(spi, &regcfg);

View file

@ -775,7 +775,7 @@ static int men_z135_verify_port(struct uart_port *port,
return -EINVAL;
}
static struct uart_ops men_z135_ops = {
static const struct uart_ops men_z135_ops = {
.tx_empty = men_z135_tx_empty,
.set_mctrl = men_z135_set_mctrl,
.get_mctrl = men_z135_get_mctrl,

View file

@ -1317,7 +1317,7 @@ static void mxs_auart_break_ctl(struct uart_port *u, int ctl)
mxs_clr(AUART_LINECTRL_BRK, s, REG_LINECTRL);
}
static struct uart_ops mxs_auart_ops = {
static const struct uart_ops mxs_auart_ops = {
.tx_empty = mxs_auart_tx_empty,
.start_tx = mxs_auart_start_tx,
.stop_tx = mxs_auart_stop_tx,
@ -1510,10 +1510,7 @@ static int mxs_get_clks(struct mxs_auart_port *s,
if (!is_asm9260_auart(s)) {
s->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(s->clk))
return PTR_ERR(s->clk);
return 0;
return PTR_ERR_OR_ZERO(s->clk);
}
s->clk = devm_clk_get(s->dev, "mod");
@ -1537,16 +1534,20 @@ static int mxs_get_clks(struct mxs_auart_port *s,
err = clk_set_rate(s->clk, clk_get_rate(s->clk_ahb));
if (err) {
dev_err(s->dev, "Failed to set rate!\n");
return err;
goto disable_clk_ahb;
}
err = clk_prepare_enable(s->clk);
if (err) {
dev_err(s->dev, "Failed to enable clk!\n");
return err;
goto disable_clk_ahb;
}
return 0;
disable_clk_ahb:
clk_disable_unprepare(s->clk_ahb);
return err;
}
/*

View file

@ -1604,7 +1604,7 @@ static void pch_uart_put_poll_char(struct uart_port *port,
}
#endif /* CONFIG_CONSOLE_POLL */
static struct uart_ops pch_uart_ops = {
static const struct uart_ops pch_uart_ops = {
.tx_empty = pch_uart_tx_empty,
.set_mctrl = pch_uart_set_mctrl,
.get_mctrl = pch_uart_get_mctrl,

View file

@ -1577,7 +1577,7 @@ static void s3c24xx_serial_resetport(struct uart_port *port,
}
#ifdef CONFIG_CPU_FREQ
#ifdef CONFIG_ARM_S3C24XX_CPUFREQ
static int s3c24xx_serial_cpufreq_transition(struct notifier_block *nb,
unsigned long val, void *data)

View file

@ -102,7 +102,7 @@ struct s3c24xx_uart_port {
struct s3c24xx_uart_dma *dma;
#ifdef CONFIG_CPU_FREQ
#ifdef CONFIG_ARM_S3C24XX_CPUFREQ
struct notifier_block freq_transition;
#endif
};

View file

@ -1205,6 +1205,10 @@ static int sc16is7xx_probe(struct device *dev,
}
#endif
/* reset device, purging any pending irq / data */
regmap_write(s->regmap, SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
SC16IS7XX_IOCONTROL_SRESET_BIT);
for (i = 0; i < devtype->nr_uart; ++i) {
s->p[i].line = i;
/* Initialize port data */
@ -1234,6 +1238,22 @@ static int sc16is7xx_probe(struct device *dev,
init_kthread_work(&s->p[i].reg_work, sc16is7xx_reg_proc);
/* Register port */
uart_add_one_port(&sc16is7xx_uart, &s->p[i].port);
/* Enable EFR */
sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG,
SC16IS7XX_LCR_CONF_MODE_B);
regcache_cache_bypass(s->regmap, true);
/* Enable write access to enhanced features */
sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_EFR_REG,
SC16IS7XX_EFR_ENABLE_BIT);
regcache_cache_bypass(s->regmap, false);
/* Restore access to general registers */
sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG, 0x00);
/* Go to suspend mode */
sc16is7xx_power(&s->p[i].port, 0);
}

View file

@ -235,18 +235,9 @@ static int uart_startup(struct tty_struct *tty, struct uart_state *state,
if (tty_port_initialized(port))
return 0;
/*
* Set the TTY IO error marker - we will only clear this
* once we have successfully opened the port.
*/
set_bit(TTY_IO_ERROR, &tty->flags);
retval = uart_port_startup(tty, state, init_hw);
if (!retval) {
tty_port_set_initialized(port, 1);
clear_bit(TTY_IO_ERROR, &tty->flags);
} else if (retval > 0)
retval = 0;
if (retval)
set_bit(TTY_IO_ERROR, &tty->flags);
return retval;
}
@ -972,8 +963,11 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
}
uart_change_speed(tty, state, NULL);
}
} else
} else {
retval = uart_startup(tty, state, 1);
if (retval > 0)
retval = 0;
}
exit:
return retval;
}
@ -1139,6 +1133,8 @@ static int uart_do_autoconfig(struct tty_struct *tty,struct uart_state *state)
uport->ops->config_port(uport, flags);
ret = uart_startup(tty, state, 1);
if (ret > 0)
ret = 0;
}
out:
mutex_unlock(&port->mutex);
@ -1465,7 +1461,6 @@ static void uart_close(struct tty_struct *tty, struct file *filp)
{
struct uart_state *state = tty->driver_data;
struct tty_port *port;
struct uart_port *uport;
if (!state) {
struct uart_driver *drv = tty->driver->driver_state;
@ -1481,56 +1476,36 @@ static void uart_close(struct tty_struct *tty, struct file *filp)
port = &state->port;
pr_debug("uart_close(%d) called\n", tty->index);
if (tty_port_close_start(port, tty, filp) == 0)
return;
tty_port_close(tty->port, tty, filp);
}
mutex_lock(&port->mutex);
uport = uart_port_check(state);
static void uart_tty_port_shutdown(struct tty_port *port)
{
struct uart_state *state = container_of(port, struct uart_state, port);
struct uart_port *uport = uart_port_check(state);
/*
* At this point, we stop accepting input. To do this, we
* disable the receive line status interrupts.
*/
if (tty_port_initialized(port) &&
!WARN(!uport, "detached port still initialized!\n")) {
spin_lock_irq(&uport->lock);
uport->ops->stop_rx(uport);
spin_unlock_irq(&uport->lock);
/*
* Before we drop DTR, make sure the UART transmitter
* has completely drained; this is especially
* important if there is a transmit FIFO!
*/
uart_wait_until_sent(tty, uport->timeout);
}
if (WARN(!uport, "detached port still initialized!\n"))
return;
uart_shutdown(tty, state);
tty_port_tty_set(port, NULL);
spin_lock_irq(&uport->lock);
uport->ops->stop_rx(uport);
spin_unlock_irq(&uport->lock);
spin_lock_irq(&port->lock);
if (port->blocked_open) {
spin_unlock_irq(&port->lock);
if (port->close_delay)
msleep_interruptible(jiffies_to_msecs(port->close_delay));
spin_lock_irq(&port->lock);
} else if (uport && !uart_console(uport)) {
spin_unlock_irq(&port->lock);
uart_change_pm(state, UART_PM_STATE_OFF);
spin_lock_irq(&port->lock);
}
spin_unlock_irq(&port->lock);
tty_port_set_active(port, 0);
uart_port_shutdown(port);
/*
* Wake up anyone trying to open this port.
* It's possible for shutdown to be called after suspend if we get
* a DCD drop (hangup) at just the right time. Clear suspended bit so
* we don't try to resume a port that has been shutdown.
*/
wake_up_interruptible(&port->open_wait);
tty_port_set_suspended(port, 0);
mutex_unlock(&port->mutex);
uart_change_pm(state, UART_PM_STATE_OFF);
tty_ldisc_flush(tty);
tty->closing = 0;
}
static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
@ -1711,52 +1686,31 @@ static int uart_open(struct tty_struct *tty, struct file *filp)
struct uart_driver *drv = tty->driver->driver_state;
int retval, line = tty->index;
struct uart_state *state = drv->state + line;
struct tty_port *port = &state->port;
struct uart_port *uport;
pr_debug("uart_open(%d) called\n", line);
spin_lock_irq(&port->lock);
++port->count;
spin_unlock_irq(&port->lock);
/*
* We take the semaphore here to guarantee that we won't be re-entered
* while allocating the state structure, or while we request any IRQs
* that the driver may need. This also has the nice side-effect that
* it delays the action of uart_hangup, so we can guarantee that
* state->port.tty will always contain something reasonable.
*/
if (mutex_lock_interruptible(&port->mutex)) {
retval = -ERESTARTSYS;
goto end;
}
uport = uart_port_check(state);
if (!uport || uport->flags & UPF_DEAD) {
retval = -ENXIO;
goto err_unlock;
}
tty->driver_data = state;
uport->state = state;
retval = tty_port_open(&state->port, tty, filp);
if (retval > 0)
retval = 0;
return retval;
}
static int uart_port_activate(struct tty_port *port, struct tty_struct *tty)
{
struct uart_state *state = container_of(port, struct uart_state, port);
struct uart_port *uport;
uport = uart_port_check(state);
if (!uport || uport->flags & UPF_DEAD)
return -ENXIO;
port->low_latency = (uport->flags & UPF_LOW_LATENCY) ? 1 : 0;
tty_port_tty_set(port, tty);
/*
* Start up the serial port.
*/
retval = uart_startup(tty, state, 0);
/*
* If we succeeded, wait until the port is ready.
*/
err_unlock:
mutex_unlock(&port->mutex);
if (retval == 0)
retval = tty_port_block_til_ready(port, tty, filp);
end:
return retval;
return uart_startup(tty, state, 0);
}
static const char *uart_type(struct uart_port *port)
@ -1940,7 +1894,7 @@ uart_get_console(struct uart_port *ports, int nr, struct console *co)
*
* Returns 0 on success or -EINVAL on failure
*/
int uart_parse_earlycon(char *p, unsigned char *iotype, unsigned long *addr,
int uart_parse_earlycon(char *p, unsigned char *iotype, resource_size_t *addr,
char **options)
{
if (strncmp(p, "mmio,", 5) == 0) {
@ -1968,7 +1922,11 @@ int uart_parse_earlycon(char *p, unsigned char *iotype, unsigned long *addr,
return -EINVAL;
}
*addr = simple_strtoul(p, NULL, 0);
/*
* Before you replace it with kstrtoull(), think about the options separator
* (',') that it will not tolerate
*/
*addr = simple_strtoull(p, NULL, 0);
p = strchr(p, ',');
if (p)
p++;
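
To illustrate the comment above with a made-up earlycon string: when parsing something like "0xfe215040,115200n8", the console options still follow the address, so a strict converter would reject the whole token:

/* Hypothetical option string, e.g. from earlycon=pl011,0xfe215040,115200n8. */
p = "0xfe215040,115200n8";
*addr = simple_strtoull(p, NULL, 0);	/* stops at ',' and yields 0xfe215040 */
/*
 * kstrtoull(p, 0, &val) would instead return -EINVAL, because the trailing
 * ",115200n8" makes the whole string an invalid number.
 */
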
@ -2470,6 +2428,8 @@ static const struct tty_operations uart_ops = {
static const struct tty_port_operations uart_port_ops = {
.carrier_raised = uart_carrier_raised,
.dtr_rts = uart_dtr_rts,
.activate = uart_port_activate,
.shutdown = uart_tty_port_shutdown,
};
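
For readers following the uart_open()/uart_close() rewrite above, the resulting call flow is roughly the following sketch of how the generic tty_port helpers dispatch to the two callbacks registered here, not verbatim kernel code:

/*
 *  uart_open()  -> tty_port_open()  -> uart_port_ops.activate = uart_port_activate()
 *  uart_close() -> tty_port_close() -> uart_port_ops.shutdown = uart_tty_port_shutdown()
 *
 * The open counting, block-until-ready and DTR/RTS handling that
 * uart_open()/uart_close() used to do by hand are now left to the generic
 * tty_port code; the serial core only supplies these two callbacks.
 */
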
/**
@ -2786,6 +2746,8 @@ int uart_add_one_port(struct uart_driver *drv, struct uart_port *uport)
uport->cons = drv->cons;
uport->minor = drv->tty_driver->minor_start + uport->line;
port->console = uart_console(uport);
/*
* If this port is a console, then the spinlock is already
* initialised.

View file

@ -2533,7 +2533,7 @@ static int sci_verify_port(struct uart_port *port, struct serial_struct *ser)
return 0;
}
static struct uart_ops sci_uart_ops = {
static const struct uart_ops sci_uart_ops = {
.tx_empty = sci_tx_empty,
.set_mctrl = sci_set_mctrl,
.get_mctrl = sci_get_mctrl,

View file

@ -639,7 +639,7 @@ static void asc_put_poll_char(struct uart_port *port, unsigned char c)
/*---------------------------------------------------------------------*/
static struct uart_ops asc_uart_ops = {
static const struct uart_ops asc_uart_ops = {
.tx_empty = asc_tx_empty,
.set_mctrl = asc_set_mctrl,
.get_mctrl = asc_get_mctrl,

View file

@ -1,6 +1,7 @@
/*
* Copyright (C) Maxime Coquelin 2015
* Author: Maxime Coquelin <mcoquelin.stm32@gmail.com>
* Authors: Maxime Coquelin <mcoquelin.stm32@gmail.com>
* Gerald Baeza <gerald.baeza@st.com>
* License terms: GNU General Public License (GPL), version 2
*
* Inspired by st-asc.c from STMicroelectronics (c)
@ -10,120 +11,31 @@
#define SUPPORT_SYSRQ
#endif
#include <linux/module.h>
#include <linux/serial.h>
#include <linux/clk.h>
#include <linux/console.h>
#include <linux/sysrq.h>
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/tty.h>
#include <linux/tty_flip.h>
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/pm_runtime.h>
#include <linux/dma-direction.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/serial_core.h>
#include <linux/clk.h>
#include <linux/serial.h>
#include <linux/spinlock.h>
#include <linux/sysrq.h>
#include <linux/tty_flip.h>
#include <linux/tty.h>
#define DRIVER_NAME "stm32-usart"
/* Register offsets */
#define USART_SR 0x00
#define USART_DR 0x04
#define USART_BRR 0x08
#define USART_CR1 0x0c
#define USART_CR2 0x10
#define USART_CR3 0x14
#define USART_GTPR 0x18
/* USART_SR */
#define USART_SR_PE BIT(0)
#define USART_SR_FE BIT(1)
#define USART_SR_NF BIT(2)
#define USART_SR_ORE BIT(3)
#define USART_SR_IDLE BIT(4)
#define USART_SR_RXNE BIT(5)
#define USART_SR_TC BIT(6)
#define USART_SR_TXE BIT(7)
#define USART_SR_LBD BIT(8)
#define USART_SR_CTS BIT(9)
#define USART_SR_ERR_MASK (USART_SR_LBD | USART_SR_ORE | \
USART_SR_FE | USART_SR_PE)
/* Dummy bits */
#define USART_SR_DUMMY_RX BIT(16)
/* USART_DR */
#define USART_DR_MASK GENMASK(8, 0)
/* USART_BRR */
#define USART_BRR_DIV_F_MASK GENMASK(3, 0)
#define USART_BRR_DIV_M_MASK GENMASK(15, 4)
#define USART_BRR_DIV_M_SHIFT 4
/* USART_CR1 */
#define USART_CR1_SBK BIT(0)
#define USART_CR1_RWU BIT(1)
#define USART_CR1_RE BIT(2)
#define USART_CR1_TE BIT(3)
#define USART_CR1_IDLEIE BIT(4)
#define USART_CR1_RXNEIE BIT(5)
#define USART_CR1_TCIE BIT(6)
#define USART_CR1_TXEIE BIT(7)
#define USART_CR1_PEIE BIT(8)
#define USART_CR1_PS BIT(9)
#define USART_CR1_PCE BIT(10)
#define USART_CR1_WAKE BIT(11)
#define USART_CR1_M BIT(12)
#define USART_CR1_UE BIT(13)
#define USART_CR1_OVER8 BIT(15)
#define USART_CR1_IE_MASK GENMASK(8, 4)
/* USART_CR2 */
#define USART_CR2_ADD_MASK GENMASK(3, 0)
#define USART_CR2_LBDL BIT(5)
#define USART_CR2_LBDIE BIT(6)
#define USART_CR2_LBCL BIT(8)
#define USART_CR2_CPHA BIT(9)
#define USART_CR2_CPOL BIT(10)
#define USART_CR2_CLKEN BIT(11)
#define USART_CR2_STOP_2B BIT(13)
#define USART_CR2_STOP_MASK GENMASK(13, 12)
#define USART_CR2_LINEN BIT(14)
/* USART_CR3 */
#define USART_CR3_EIE BIT(0)
#define USART_CR3_IREN BIT(1)
#define USART_CR3_IRLP BIT(2)
#define USART_CR3_HDSEL BIT(3)
#define USART_CR3_NACK BIT(4)
#define USART_CR3_SCEN BIT(5)
#define USART_CR3_DMAR BIT(6)
#define USART_CR3_DMAT BIT(7)
#define USART_CR3_RTSE BIT(8)
#define USART_CR3_CTSE BIT(9)
#define USART_CR3_CTSIE BIT(10)
#define USART_CR3_ONEBIT BIT(11)
/* USART_GTPR */
#define USART_GTPR_PSC_MASK GENMASK(7, 0)
#define USART_GTPR_GT_MASK GENMASK(15, 8)
#define DRIVER_NAME "stm32-usart"
#define STM32_SERIAL_NAME "ttyS"
#define STM32_MAX_PORTS 6
struct stm32_port {
struct uart_port port;
struct clk *clk;
bool hw_flow_control;
};
static struct stm32_port stm32_ports[STM32_MAX_PORTS];
static struct uart_driver stm32_usart_driver;
#include "stm32-usart.h"
static void stm32_stop_tx(struct uart_port *port);
static void stm32_transmit_chars(struct uart_port *port);
static inline struct stm32_port *to_stm32_port(struct uart_port *port)
{
@ -148,19 +60,64 @@ static void stm32_clr_bits(struct uart_port *port, u32 reg, u32 bits)
writel_relaxed(val, port->membase + reg);
}
static void stm32_receive_chars(struct uart_port *port)
static int stm32_pending_rx(struct uart_port *port, u32 *sr, int *last_res,
bool threaded)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
enum dma_status status;
struct dma_tx_state state;
*sr = readl_relaxed(port->membase + ofs->isr);
if (threaded && stm32_port->rx_ch) {
status = dmaengine_tx_status(stm32_port->rx_ch,
stm32_port->rx_ch->cookie,
&state);
if ((status == DMA_IN_PROGRESS) &&
(*last_res != state.residue))
return 1;
else
return 0;
} else if (*sr & USART_SR_RXNE) {
return 1;
}
return 0;
}
static unsigned long
stm32_get_char(struct uart_port *port, u32 *sr, int *last_res)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long c;
if (stm32_port->rx_ch) {
c = stm32_port->rx_buf[RX_BUF_L - (*last_res)--];
if ((*last_res) == 0)
*last_res = RX_BUF_L;
return c;
} else {
return readl_relaxed(port->membase + ofs->rdr);
}
}
static void stm32_receive_chars(struct uart_port *port, bool threaded)
{
struct tty_port *tport = &port->state->port;
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long c;
u32 sr;
char flag;
static int last_res = RX_BUF_L;
if (port->irq_wake)
pm_wakeup_event(tport->tty->dev, 0);
while ((sr = readl_relaxed(port->membase + USART_SR)) & USART_SR_RXNE) {
while (stm32_pending_rx(port, &sr, &last_res, threaded)) {
sr |= USART_SR_DUMMY_RX;
c = readl_relaxed(port->membase + USART_DR);
c = stm32_get_char(port, &sr, &last_res);
flag = TTY_NORMAL;
port->icount.rx++;
@ -170,6 +127,10 @@ static void stm32_receive_chars(struct uart_port *port)
if (uart_handle_break(port))
continue;
} else if (sr & USART_SR_ORE) {
if (ofs->icr != UNDEF_REG)
writel_relaxed(USART_ICR_ORECF,
port->membase +
ofs->icr);
port->icount.overrun++;
} else if (sr & USART_SR_PE) {
port->icount.parity++;
@ -197,14 +158,138 @@ static void stm32_receive_chars(struct uart_port *port)
spin_lock(&port->lock);
}
static void stm32_tx_dma_complete(void *arg)
{
struct uart_port *port = arg;
struct stm32_port *stm32port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
unsigned int isr;
int ret;
ret = readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
isr,
(isr & USART_SR_TC),
10, 100000);
if (ret)
dev_err(port->dev, "terminal count not set\n");
if (ofs->icr == UNDEF_REG)
stm32_clr_bits(port, ofs->isr, USART_SR_TC);
else
stm32_set_bits(port, ofs->icr, USART_CR_TC);
stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
stm32port->tx_dma_busy = false;
/* Let's see if we have pending data to send */
stm32_transmit_chars(port);
}
static void stm32_transmit_chars_pio(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
unsigned int isr;
int ret;
if (stm32_port->tx_dma_busy) {
stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
stm32_port->tx_dma_busy = false;
}
ret = readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
isr,
(isr & USART_SR_TXE),
10, 100);
if (ret)
dev_err(port->dev, "tx empty not set\n");
stm32_set_bits(port, ofs->cr1, USART_CR1_TXEIE);
writel_relaxed(xmit->buf[xmit->tail], port->membase + ofs->tdr);
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
port->icount.tx++;
}
static void stm32_transmit_chars_dma(struct uart_port *port)
{
struct stm32_port *stm32port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
struct dma_async_tx_descriptor *desc = NULL;
dma_cookie_t cookie;
unsigned int count, i;
if (stm32port->tx_dma_busy)
return;
stm32port->tx_dma_busy = true;
count = uart_circ_chars_pending(xmit);
if (count > TX_BUF_L)
count = TX_BUF_L;
if (xmit->tail < xmit->head) {
memcpy(&stm32port->tx_buf[0], &xmit->buf[xmit->tail], count);
} else {
size_t one = UART_XMIT_SIZE - xmit->tail;
size_t two;
if (one > count)
one = count;
two = count - one;
memcpy(&stm32port->tx_buf[0], &xmit->buf[xmit->tail], one);
if (two)
memcpy(&stm32port->tx_buf[one], &xmit->buf[0], two);
}
desc = dmaengine_prep_slave_single(stm32port->tx_ch,
stm32port->tx_dma_buf,
count,
DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT);
if (!desc) {
for (i = count; i > 0; i--)
stm32_transmit_chars_pio(port);
return;
}
desc->callback = stm32_tx_dma_complete;
desc->callback_param = port;
/* Push current DMA TX transaction in the pending queue */
cookie = dmaengine_submit(desc);
/* Issue pending DMA TX requests */
dma_async_issue_pending(stm32port->tx_ch);
stm32_clr_bits(port, ofs->isr, USART_SR_TC);
stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
port->icount.tx += count;
}
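
A quick worked example of the wrap handling in stm32_transmit_chars_dma() above, with made-up ring indices; UART_XMIT_SIZE is PAGE_SIZE (typically 4096), and TX_BUF_L is assumed to be at least as large as the pending count:

/* Assumed circular buffer state: the pending data wraps around the end. */
xmit->tail = 4000;
xmit->head = 100;
count = (100 - 4000) & (4096 - 1);	/* uart_circ_chars_pending() = 196 */
one   = 4096 - 4000;			/* 96 bytes up to the end of the ring */
two   = 196 - 96;			/* 100 bytes from the start of the ring */
/* memcpy() 96 bytes from &xmit->buf[4000], then 100 bytes from &xmit->buf[0],
 * into the linear tx_buf handed to the DMA engine. */
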
static void stm32_transmit_chars(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct circ_buf *xmit = &port->state->xmit;
if (port->x_char) {
writel_relaxed(port->x_char, port->membase + USART_DR);
if (stm32_port->tx_dma_busy)
stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
writel_relaxed(port->x_char, port->membase + ofs->tdr);
port->x_char = 0;
port->icount.tx++;
if (stm32_port->tx_dma_busy)
stm32_set_bits(port, ofs->cr3, USART_CR3_DMAT);
return;
}
@ -218,9 +303,10 @@ static void stm32_transmit_chars(struct uart_port *port)
return;
}
writel_relaxed(xmit->buf[xmit->tail], port->membase + USART_DR);
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
port->icount.tx++;
if (stm32_port->tx_ch)
stm32_transmit_chars_dma(port);
else
stm32_transmit_chars_pio(port);
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(port);
@ -232,34 +318,60 @@ static void stm32_transmit_chars(struct uart_port *port)
static irqreturn_t stm32_interrupt(int irq, void *ptr)
{
struct uart_port *port = ptr;
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
u32 sr;
spin_lock(&port->lock);
sr = readl_relaxed(port->membase + USART_SR);
sr = readl_relaxed(port->membase + ofs->isr);
if (sr & USART_SR_RXNE)
stm32_receive_chars(port);
if ((sr & USART_SR_RXNE) && !(stm32_port->rx_ch))
stm32_receive_chars(port, false);
if (sr & USART_SR_TXE)
if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch))
stm32_transmit_chars(port);
spin_unlock(&port->lock);
if (stm32_port->rx_ch)
return IRQ_WAKE_THREAD;
else
return IRQ_HANDLED;
}
static irqreturn_t stm32_threaded_interrupt(int irq, void *ptr)
{
struct uart_port *port = ptr;
struct stm32_port *stm32_port = to_stm32_port(port);
spin_lock(&port->lock);
if (stm32_port->rx_ch)
stm32_receive_chars(port, true);
spin_unlock(&port->lock);
return IRQ_HANDLED;
}
static unsigned int stm32_tx_empty(struct uart_port *port)
{
return readl_relaxed(port->membase + USART_SR) & USART_SR_TXE;
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
return readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE;
}
static void stm32_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
if ((mctrl & TIOCM_RTS) && (port->status & UPSTAT_AUTORTS))
stm32_set_bits(port, USART_CR3, USART_CR3_RTSE);
stm32_set_bits(port, ofs->cr3, USART_CR3_RTSE);
else
stm32_clr_bits(port, USART_CR3, USART_CR3_RTSE);
stm32_clr_bits(port, ofs->cr3, USART_CR3_RTSE);
}
static unsigned int stm32_get_mctrl(struct uart_port *port)
@ -271,7 +383,10 @@ static unsigned int stm32_get_mctrl(struct uart_port *port)
/* Transmit stop */
static void stm32_stop_tx(struct uart_port *port)
{
stm32_clr_bits(port, USART_CR1, USART_CR1_TXEIE);
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
stm32_clr_bits(port, ofs->cr1, USART_CR1_TXEIE);
}
/* There are probably characters waiting to be transmitted. */
@ -282,33 +397,40 @@ static void stm32_start_tx(struct uart_port *port)
if (uart_circ_empty(xmit))
return;
stm32_set_bits(port, USART_CR1, USART_CR1_TXEIE | USART_CR1_TE);
stm32_transmit_chars(port);
}
/* Throttle the remote when input buffer is about to overflow. */
static void stm32_throttle(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long flags;
spin_lock_irqsave(&port->lock, flags);
stm32_clr_bits(port, USART_CR1, USART_CR1_RXNEIE);
stm32_clr_bits(port, ofs->cr1, USART_CR1_RXNEIE);
spin_unlock_irqrestore(&port->lock, flags);
}
/* Unthrottle the remote, the input buffer can now accept data. */
static void stm32_unthrottle(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
unsigned long flags;
spin_lock_irqsave(&port->lock, flags);
stm32_set_bits(port, USART_CR1, USART_CR1_RXNEIE);
stm32_set_bits(port, ofs->cr1, USART_CR1_RXNEIE);
spin_unlock_irqrestore(&port->lock, flags);
}
/* Receive stop */
static void stm32_stop_rx(struct uart_port *port)
{
stm32_clr_bits(port, USART_CR1, USART_CR1_RXNEIE);
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
stm32_clr_bits(port, ofs->cr1, USART_CR1_RXNEIE);
}
/* Handle breaks - ignored by us */
@ -318,26 +440,34 @@ static void stm32_break_ctl(struct uart_port *port, int break_state)
static int stm32_startup(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
const char *name = to_platform_device(port->dev)->name;
u32 val;
int ret;
ret = request_irq(port->irq, stm32_interrupt, 0, name, port);
ret = request_threaded_irq(port->irq, stm32_interrupt,
stm32_threaded_interrupt,
IRQF_NO_SUSPEND, name, port);
if (ret)
return ret;
val = USART_CR1_RXNEIE | USART_CR1_TE | USART_CR1_RE;
stm32_set_bits(port, USART_CR1, val);
stm32_set_bits(port, ofs->cr1, val);
return 0;
}
static void stm32_shutdown(struct uart_port *port)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct stm32_usart_config *cfg = &stm32_port->info->cfg;
u32 val;
val = USART_CR1_TXEIE | USART_CR1_RXNEIE | USART_CR1_TE | USART_CR1_RE;
stm32_set_bits(port, USART_CR1, val);
val |= BIT(cfg->uart_enable_bit);
stm32_clr_bits(port, ofs->cr1, val);
free_irq(port->irq, port);
}
@ -346,6 +476,8 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
struct ktermios *old)
{
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct stm32_usart_config *cfg = &stm32_port->info->cfg;
unsigned int baud;
u32 usartdiv, mantissa, fraction, oversampling;
tcflag_t cflag = termios->c_cflag;
@ -360,9 +492,10 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
spin_lock_irqsave(&port->lock, flags);
/* Stop serial port and reset value */
writel_relaxed(0, port->membase + USART_CR1);
writel_relaxed(0, port->membase + ofs->cr1);
cr1 = USART_CR1_TE | USART_CR1_RE | USART_CR1_UE | USART_CR1_RXNEIE;
cr1 = USART_CR1_TE | USART_CR1_RE | USART_CR1_RXNEIE;
cr1 |= BIT(cfg->uart_enable_bit);
cr2 = 0;
cr3 = 0;
@ -371,8 +504,12 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
if (cflag & PARENB) {
cr1 |= USART_CR1_PCE;
if ((cflag & CSIZE) == CS8)
cr1 |= USART_CR1_M;
if ((cflag & CSIZE) == CS8) {
if (cfg->has_7bits_data)
cr1 |= USART_CR1_M0;
else
cr1 |= USART_CR1_M;
}
}
if (cflag & PARODD)
@ -394,15 +531,15 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
*/
if (usartdiv < 16) {
oversampling = 8;
stm32_set_bits(port, USART_CR1, USART_CR1_OVER8);
stm32_set_bits(port, ofs->cr1, USART_CR1_OVER8);
} else {
oversampling = 16;
stm32_clr_bits(port, USART_CR1, USART_CR1_OVER8);
stm32_clr_bits(port, ofs->cr1, USART_CR1_OVER8);
}
mantissa = (usartdiv / oversampling) << USART_BRR_DIV_M_SHIFT;
fraction = usartdiv % oversampling;
writel_relaxed(mantissa | fraction, port->membase + USART_BRR);
writel_relaxed(mantissa | fraction, port->membase + ofs->brr);
uart_update_timeout(port, cflag, baud);
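
As a concrete illustration of the mantissa/fraction packing above; the usartdiv computation is outside this hunk, so assume it is simply the input clock divided by the requested baud rate, here a 16 MHz clock at 115200 baud:

/* Assumed inputs: uartclk = 16000000, baud = 115200 -> usartdiv ~= 139. */
usartdiv = 139;					/* >= 16, so oversampling = 16 */
mantissa = (139 / 16) << USART_BRR_DIV_M_SHIFT;	/* 8 << 4 = 0x80 */
fraction = 139 % 16;				/* 11 = 0x0b */
/* BRR = 0x8b, giving 16000000 / 139 ~= 115108 baud (about 0.08% low). */
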
@ -430,9 +567,12 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios,
if ((termios->c_cflag & CREAD) == 0)
port->ignore_status_mask |= USART_SR_DUMMY_RX;
writel_relaxed(cr3, port->membase + USART_CR3);
writel_relaxed(cr2, port->membase + USART_CR2);
writel_relaxed(cr1, port->membase + USART_CR1);
if (stm32_port->rx_ch)
cr3 |= USART_CR3_DMAR;
writel_relaxed(cr3, port->membase + ofs->cr3);
writel_relaxed(cr2, port->membase + ofs->cr2);
writel_relaxed(cr1, port->membase + ofs->cr1);
spin_unlock_irqrestore(&port->lock, flags);
}
@ -469,6 +609,8 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
{
struct stm32_port *stm32port = container_of(port,
struct stm32_port, port);
struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct stm32_usart_config *cfg = &stm32port->info->cfg;
unsigned long flags = 0;
switch (state) {
@ -477,7 +619,7 @@ static void stm32_pm(struct uart_port *port, unsigned int state,
break;
case UART_PM_STATE_OFF:
spin_lock_irqsave(&port->lock, flags);
stm32_clr_bits(port, USART_CR1, USART_CR1_UE);
stm32_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
spin_unlock_irqrestore(&port->lock, flags);
clk_disable_unprepare(stm32port->clk);
break;
@ -539,8 +681,6 @@ static int stm32_init_port(struct stm32_port *stm32port,
if (!stm32port->port.uartclk)
ret = -EINVAL;
clk_disable_unprepare(stm32port->clk);
return ret;
}
@ -560,30 +700,162 @@ static struct stm32_port *stm32_of_get_stm32_port(struct platform_device *pdev)
return NULL;
stm32_ports[id].hw_flow_control = of_property_read_bool(np,
"auto-flow-control");
"st,hw-flow-ctrl");
stm32_ports[id].port.line = id;
return &stm32_ports[id];
}
#ifdef CONFIG_OF
static const struct of_device_id stm32_match[] = {
{ .compatible = "st,stm32-usart", },
{ .compatible = "st,stm32-uart", },
{ .compatible = "st,stm32-usart", .data = &stm32f4_info},
{ .compatible = "st,stm32-uart", .data = &stm32f4_info},
{ .compatible = "st,stm32f7-usart", .data = &stm32f7_info},
{ .compatible = "st,stm32f7-uart", .data = &stm32f7_info},
{},
};
MODULE_DEVICE_TABLE(of, stm32_match);
#endif
static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
struct platform_device *pdev)
{
struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct uart_port *port = &stm32port->port;
struct device *dev = &pdev->dev;
struct dma_slave_config config;
struct dma_async_tx_descriptor *desc = NULL;
dma_cookie_t cookie;
int ret;
/* Request DMA RX channel */
stm32port->rx_ch = dma_request_slave_channel(dev, "rx");
if (!stm32port->rx_ch) {
dev_info(dev, "rx dma alloc failed\n");
return -ENODEV;
}
stm32port->rx_buf = dma_alloc_coherent(&pdev->dev, RX_BUF_L,
&stm32port->rx_dma_buf,
GFP_KERNEL);
if (!stm32port->rx_buf) {
ret = -ENOMEM;
goto alloc_err;
}
/* Configure DMA channel */
memset(&config, 0, sizeof(config));
config.src_addr = port->mapbase + ofs->rdr;
config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
ret = dmaengine_slave_config(stm32port->rx_ch, &config);
if (ret < 0) {
dev_err(dev, "rx dma channel config failed\n");
ret = -ENODEV;
goto config_err;
}
/* Prepare a DMA cyclic transaction */
desc = dmaengine_prep_dma_cyclic(stm32port->rx_ch,
stm32port->rx_dma_buf,
RX_BUF_L, RX_BUF_P, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT);
if (!desc) {
dev_err(dev, "rx dma prep cyclic failed\n");
ret = -ENODEV;
goto config_err;
}
/* No callback as dma buffer is drained on usart interrupt */
desc->callback = NULL;
desc->callback_param = NULL;
/* Push current DMA transaction in the pending queue */
cookie = dmaengine_submit(desc);
/* Issue pending DMA requests */
dma_async_issue_pending(stm32port->rx_ch);
return 0;
config_err:
dma_free_coherent(&pdev->dev,
RX_BUF_L, stm32port->rx_buf,
stm32port->rx_dma_buf);
alloc_err:
dma_release_channel(stm32port->rx_ch);
stm32port->rx_ch = NULL;
return ret;
}
static int stm32_of_dma_tx_probe(struct stm32_port *stm32port,
struct platform_device *pdev)
{
struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
struct uart_port *port = &stm32port->port;
struct device *dev = &pdev->dev;
struct dma_slave_config config;
int ret;
stm32port->tx_dma_busy = false;
/* Request DMA TX channel */
stm32port->tx_ch = dma_request_slave_channel(dev, "tx");
if (!stm32port->tx_ch) {
dev_info(dev, "tx dma alloc failed\n");
return -ENODEV;
}
stm32port->tx_buf = dma_alloc_coherent(&pdev->dev, TX_BUF_L,
&stm32port->tx_dma_buf,
GFP_KERNEL);
if (!stm32port->tx_buf) {
ret = -ENOMEM;
goto alloc_err;
}
/* Configure DMA channel */
memset(&config, 0, sizeof(config));
config.dst_addr = port->mapbase + ofs->tdr;
config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
ret = dmaengine_slave_config(stm32port->tx_ch, &config);
if (ret < 0) {
dev_err(dev, "tx dma channel config failed\n");
ret = -ENODEV;
goto config_err;
}
return 0;
config_err:
dma_free_coherent(&pdev->dev,
TX_BUF_L, stm32port->tx_buf,
stm32port->tx_dma_buf);
alloc_err:
dma_release_channel(stm32port->tx_ch);
stm32port->tx_ch = NULL;
return ret;
}
static int stm32_serial_probe(struct platform_device *pdev)
{
int ret;
const struct of_device_id *match;
struct stm32_port *stm32port;
int ret;
stm32port = stm32_of_get_stm32_port(pdev);
if (!stm32port)
return -ENODEV;
match = of_match_device(stm32_match, &pdev->dev);
if (match && match->data)
stm32port->info = (struct stm32_usart_info *)match->data;
else
return -EINVAL;
ret = stm32_init_port(stm32port, pdev);
if (ret)
return ret;
@ -592,6 +864,14 @@ static int stm32_serial_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = stm32_of_dma_rx_probe(stm32port, pdev);
if (ret)
dev_info(&pdev->dev, "interrupt mode used for rx (no dma)\n");
ret = stm32_of_dma_tx_probe(stm32port, pdev);
if (ret)
dev_info(&pdev->dev, "interrupt mode used for tx (no dma)\n");
platform_set_drvdata(pdev, &stm32port->port);
return 0;
@ -600,6 +880,30 @@ static int stm32_serial_probe(struct platform_device *pdev)
static int stm32_serial_remove(struct platform_device *pdev)
{
struct uart_port *port = platform_get_drvdata(pdev);
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
if (stm32_port->rx_ch)
dma_release_channel(stm32_port->rx_ch);
if (stm32_port->rx_dma_buf)
dma_free_coherent(&pdev->dev,
RX_BUF_L, stm32_port->rx_buf,
stm32_port->rx_dma_buf);
stm32_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
if (stm32_port->tx_ch)
dma_release_channel(stm32_port->tx_ch);
if (stm32_port->tx_dma_buf)
dma_free_coherent(&pdev->dev,
TX_BUF_L, stm32_port->tx_buf,
stm32_port->tx_dma_buf);
clk_disable_unprepare(stm32_port->clk);
return uart_remove_one_port(&stm32_usart_driver, port);
}
@ -608,15 +912,21 @@ static int stm32_serial_remove(struct platform_device *pdev)
#ifdef CONFIG_SERIAL_STM32_CONSOLE
static void stm32_console_putchar(struct uart_port *port, int ch)
{
while (!(readl_relaxed(port->membase + USART_SR) & USART_SR_TXE))
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
while (!(readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE))
cpu_relax();
writel_relaxed(ch, port->membase + USART_DR);
writel_relaxed(ch, port->membase + ofs->tdr);
}
static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
{
struct uart_port *port = &stm32_ports[co->index].port;
struct stm32_port *stm32_port = to_stm32_port(port);
struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
struct stm32_usart_config *cfg = &stm32_port->info->cfg;
unsigned long flags;
u32 old_cr1, new_cr1;
int locked = 1;
@ -629,15 +939,16 @@ static void stm32_console_write(struct console *co, const char *s, unsigned cnt)
else
spin_lock(&port->lock);
/* Save and disable interrupts */
old_cr1 = readl_relaxed(port->membase + USART_CR1);
/* Save and disable interrupts, enable the transmitter */
old_cr1 = readl_relaxed(port->membase + ofs->cr1);
new_cr1 = old_cr1 & ~USART_CR1_IE_MASK;
writel_relaxed(new_cr1, port->membase + USART_CR1);
new_cr1 |= USART_CR1_TE | BIT(cfg->uart_enable_bit);
writel_relaxed(new_cr1, port->membase + ofs->cr1);
uart_console_write(port, s, cnt, stm32_console_putchar);
/* Restore interrupt state */
writel_relaxed(old_cr1, port->membase + USART_CR1);
writel_relaxed(old_cr1, port->membase + ofs->cr1);
if (locked)
spin_unlock(&port->lock);

View file

@ -0,0 +1,229 @@
/*
* Copyright (C) Maxime Coquelin 2015
* Authors: Maxime Coquelin <mcoquelin.stm32@gmail.com>
* Gerald Baeza <gerald_baeza@yahoo.fr>
* License terms: GNU General Public License (GPL), version 2
*/
#define DRIVER_NAME "stm32-usart"
struct stm32_usart_offsets {
u8 cr1;
u8 cr2;
u8 cr3;
u8 brr;
u8 gtpr;
u8 rtor;
u8 rqr;
u8 isr;
u8 icr;
u8 rdr;
u8 tdr;
};
struct stm32_usart_config {
u8 uart_enable_bit; /* USART_CR1_UE */
bool has_7bits_data;
};
struct stm32_usart_info {
struct stm32_usart_offsets ofs;
struct stm32_usart_config cfg;
};
#define UNDEF_REG ~0
/* Register offsets */
struct stm32_usart_info stm32f4_info = {
.ofs = {
.isr = 0x00,
.rdr = 0x04,
.tdr = 0x04,
.brr = 0x08,
.cr1 = 0x0c,
.cr2 = 0x10,
.cr3 = 0x14,
.gtpr = 0x18,
.rtor = UNDEF_REG,
.rqr = UNDEF_REG,
.icr = UNDEF_REG,
},
.cfg = {
.uart_enable_bit = 13,
.has_7bits_data = false,
}
};
struct stm32_usart_info stm32f7_info = {
.ofs = {
.cr1 = 0x00,
.cr2 = 0x04,
.cr3 = 0x08,
.brr = 0x0c,
.gtpr = 0x10,
.rtor = 0x14,
.rqr = 0x18,
.isr = 0x1c,
.icr = 0x20,
.rdr = 0x24,
.tdr = 0x28,
},
.cfg = {
.uart_enable_bit = 0,
.has_7bits_data = true,
}
};
/* USART_SR (F4) / USART_ISR (F7) */
#define USART_SR_PE BIT(0)
#define USART_SR_FE BIT(1)
#define USART_SR_NF BIT(2)
#define USART_SR_ORE BIT(3)
#define USART_SR_IDLE BIT(4)
#define USART_SR_RXNE BIT(5)
#define USART_SR_TC BIT(6)
#define USART_SR_TXE BIT(7)
#define USART_SR_LBD BIT(8)
#define USART_SR_CTSIF BIT(9)
#define USART_SR_CTS BIT(10) /* F7 */
#define USART_SR_RTOF BIT(11) /* F7 */
#define USART_SR_EOBF BIT(12) /* F7 */
#define USART_SR_ABRE BIT(14) /* F7 */
#define USART_SR_ABRF BIT(15) /* F7 */
#define USART_SR_BUSY BIT(16) /* F7 */
#define USART_SR_CMF BIT(17) /* F7 */
#define USART_SR_SBKF BIT(18) /* F7 */
#define USART_SR_TEACK BIT(21) /* F7 */
#define USART_SR_ERR_MASK (USART_SR_LBD | USART_SR_ORE | \
USART_SR_FE | USART_SR_PE)
/* Dummy bits */
#define USART_SR_DUMMY_RX BIT(16)
/* USART_ICR (F7) */
#define USART_CR_TC BIT(6)
/* USART_DR */
#define USART_DR_MASK GENMASK(8, 0)
/* USART_BRR */
#define USART_BRR_DIV_F_MASK GENMASK(3, 0)
#define USART_BRR_DIV_M_MASK GENMASK(15, 4)
#define USART_BRR_DIV_M_SHIFT 4
/* USART_CR1 */
#define USART_CR1_SBK BIT(0)
#define USART_CR1_RWU BIT(1) /* F4 */
#define USART_CR1_RE BIT(2)
#define USART_CR1_TE BIT(3)
#define USART_CR1_IDLEIE BIT(4)
#define USART_CR1_RXNEIE BIT(5)
#define USART_CR1_TCIE BIT(6)
#define USART_CR1_TXEIE BIT(7)
#define USART_CR1_PEIE BIT(8)
#define USART_CR1_PS BIT(9)
#define USART_CR1_PCE BIT(10)
#define USART_CR1_WAKE BIT(11)
#define USART_CR1_M BIT(12)
#define USART_CR1_M0 BIT(12) /* F7 */
#define USART_CR1_MME BIT(13) /* F7 */
#define USART_CR1_CMIE BIT(14) /* F7 */
#define USART_CR1_OVER8 BIT(15)
#define USART_CR1_DEDT_MASK GENMASK(20, 16) /* F7 */
#define USART_CR1_DEAT_MASK GENMASK(25, 21) /* F7 */
#define USART_CR1_RTOIE BIT(26) /* F7 */
#define USART_CR1_EOBIE BIT(27) /* F7 */
#define USART_CR1_M1 BIT(28) /* F7 */
#define USART_CR1_IE_MASK (GENMASK(8, 4) | BIT(14) | BIT(26) | BIT(27))
/* USART_CR2 */
#define USART_CR2_ADD_MASK GENMASK(3, 0) /* F4 */
#define USART_CR2_ADDM7 BIT(4) /* F7 */
#define USART_CR2_LBDL BIT(5)
#define USART_CR2_LBDIE BIT(6)
#define USART_CR2_LBCL BIT(8)
#define USART_CR2_CPHA BIT(9)
#define USART_CR2_CPOL BIT(10)
#define USART_CR2_CLKEN BIT(11)
#define USART_CR2_STOP_2B BIT(13)
#define USART_CR2_STOP_MASK GENMASK(13, 12)
#define USART_CR2_LINEN BIT(14)
#define USART_CR2_SWAP BIT(15) /* F7 */
#define USART_CR2_RXINV BIT(16) /* F7 */
#define USART_CR2_TXINV BIT(17) /* F7 */
#define USART_CR2_DATAINV BIT(18) /* F7 */
#define USART_CR2_MSBFIRST BIT(19) /* F7 */
#define USART_CR2_ABREN BIT(20) /* F7 */
#define USART_CR2_ABRMOD_MASK GENMASK(22, 21) /* F7 */
#define USART_CR2_RTOEN BIT(23) /* F7 */
#define USART_CR2_ADD_F7_MASK GENMASK(31, 24) /* F7 */
/* USART_CR3 */
#define USART_CR3_EIE BIT(0)
#define USART_CR3_IREN BIT(1)
#define USART_CR3_IRLP BIT(2)
#define USART_CR3_HDSEL BIT(3)
#define USART_CR3_NACK BIT(4)
#define USART_CR3_SCEN BIT(5)
#define USART_CR3_DMAR BIT(6)
#define USART_CR3_DMAT BIT(7)
#define USART_CR3_RTSE BIT(8)
#define USART_CR3_CTSE BIT(9)
#define USART_CR3_CTSIE BIT(10)
#define USART_CR3_ONEBIT BIT(11)
#define USART_CR3_OVRDIS BIT(12) /* F7 */
#define USART_CR3_DDRE BIT(13) /* F7 */
#define USART_CR3_DEM BIT(14) /* F7 */
#define USART_CR3_DEP BIT(15) /* F7 */
#define USART_CR3_SCARCNT_MASK GENMASK(19, 17) /* F7 */
/* USART_GTPR */
#define USART_GTPR_PSC_MASK GENMASK(7, 0)
#define USART_GTPR_GT_MASK GENMASK(15, 8)
/* USART_RTOR */
#define USART_RTOR_RTO_MASK GENMASK(23, 0) /* F7 */
#define USART_RTOR_BLEN_MASK GENMASK(31, 24) /* F7 */
/* USART_RQR */
#define USART_RQR_ABRRQ BIT(0) /* F7 */
#define USART_RQR_SBKRQ BIT(1) /* F7 */
#define USART_RQR_MMRQ BIT(2) /* F7 */
#define USART_RQR_RXFRQ BIT(3) /* F7 */
#define USART_RQR_TXFRQ BIT(4) /* F7 */
/* USART_ICR */
#define USART_ICR_PECF BIT(0) /* F7 */
#define USART_ICR_FFECF BIT(1) /* F7 */
#define USART_ICR_NCF BIT(2) /* F7 */
#define USART_ICR_ORECF BIT(3) /* F7 */
#define USART_ICR_IDLECF BIT(4) /* F7 */
#define USART_ICR_TCCF BIT(6) /* F7 */
#define USART_ICR_LBDCF BIT(8) /* F7 */
#define USART_ICR_CTSCF BIT(9) /* F7 */
#define USART_ICR_RTOCF BIT(11) /* F7 */
#define USART_ICR_EOBCF BIT(12) /* F7 */
#define USART_ICR_CMCF BIT(17) /* F7 */
#define STM32_SERIAL_NAME "ttyS"
#define STM32_MAX_PORTS 6
#define RX_BUF_L 200 /* dma rx buffer length */
#define RX_BUF_P RX_BUF_L /* dma rx buffer period */
#define TX_BUF_L 200 /* dma tx buffer length */
struct stm32_port {
struct uart_port port;
struct clk *clk;
struct stm32_usart_info *info;
struct dma_chan *rx_ch; /* dma rx channel */
dma_addr_t rx_dma_buf; /* dma rx buffer bus address */
unsigned char *rx_buf; /* dma rx buffer cpu address */
struct dma_chan *tx_ch; /* dma tx channel */
dma_addr_t tx_dma_buf; /* dma tx buffer bus address */
unsigned char *tx_buf; /* dma tx buffer cpu address */
bool tx_dma_busy; /* dma tx busy */
bool hw_flow_control;
};
static struct stm32_port stm32_ports[STM32_MAX_PORTS];
static struct uart_driver stm32_usart_driver;
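One note on the layout above (my summary, not header text): the per-variant stm32_usart_info/stm32_usart_offsets tables let the driver parameterise register accesses instead of hard-coding F4 addresses, roughly as in this sketch:

/* Sketch only — the same pattern the console and DMA code above rely on */
static u32 stm32_read_isr(struct uart_port *port)
{
	struct stm32_port *stm32_port = to_stm32_port(port);
	struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;

	/* ofs->isr is 0x00 on stm32f4 (SR) and 0x1c on stm32f7 (ISR) */
	return readl_relaxed(port->membase + ofs->isr);
}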

View file

@ -394,7 +394,7 @@ static int timbuart_verify_port(struct uart_port *port,
return -EINVAL;
}
static struct uart_ops timbuart_ops = {
static const struct uart_ops timbuart_ops = {
.tx_empty = timbuart_tx_empty,
.set_mctrl = timbuart_set_mctrl,
.get_mctrl = timbuart_get_mctrl,

View file

@ -387,7 +387,7 @@ static void ulite_put_poll_char(struct uart_port *port, unsigned char ch)
}
#endif
static struct uart_ops ulite_ops = {
static const struct uart_ops ulite_ops = {
.tx_empty = ulite_tx_empty,
.set_mctrl = ulite_set_mctrl,
.get_mctrl = ulite_get_mctrl,

View file

@ -118,7 +118,7 @@ struct vt8500_port {
* have been allocated as we can't use pdev->id in
* devicetree
*/
static unsigned long vt8500_ports_in_use;
static DECLARE_BITMAP(vt8500_ports_in_use, VT8500_MAX_PORTS);
static inline void vt8500_write(struct uart_port *port, unsigned int val,
unsigned int off)
@ -663,15 +663,15 @@ static int vt8500_serial_probe(struct platform_device *pdev)
if (port < 0) {
/* calculate the port id */
port = find_first_zero_bit(&vt8500_ports_in_use,
sizeof(vt8500_ports_in_use));
port = find_first_zero_bit(vt8500_ports_in_use,
VT8500_MAX_PORTS);
}
if (port >= VT8500_MAX_PORTS)
return -ENODEV;
/* reserve the port id */
if (test_and_set_bit(port, &vt8500_ports_in_use)) {
if (test_and_set_bit(port, vt8500_ports_in_use)) {
/* port already in use - shouldn't really happen */
return -EBUSY;
}
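For background on this fix (my wording): DECLARE_BITMAP() declares an array of unsigned long sized for the given number of bits, and the bitmap helpers take a bit count, whereas the old code passed sizeof() — a byte count — and the address of a single long. The idiomatic pattern is roughly:

DECLARE_BITMAP(ports_in_use, VT8500_MAX_PORTS);	/* storage for VT8500_MAX_PORTS bits */

int id = find_first_zero_bit(ports_in_use, VT8500_MAX_PORTS);
if (id < VT8500_MAX_PORTS && !test_and_set_bit(id, ports_in_use)) {
	/* id is now reserved atomically */
}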

View file

@ -57,7 +57,7 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
#define CDNS_UART_IMR 0x10 /* Interrupt Mask */
#define CDNS_UART_ISR 0x14 /* Interrupt Status */
#define CDNS_UART_BAUDGEN 0x18 /* Baud Rate Generator */
#define CDNS_UART_RXTOUT 0x1C /* RX Timeout */
#define CDNS_UART_RXWM 0x20 /* RX FIFO Trigger Level */
#define CDNS_UART_MODEMCR 0x24 /* Modem Control */
#define CDNS_UART_MODEMSR 0x28 /* Modem Status */
@ -68,6 +68,7 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
#define CDNS_UART_IRRX_PWIDTH 0x3C /* IR Min Received Pulse Width */
#define CDNS_UART_IRTX_PWIDTH 0x40 /* IR Transmitted pulse Width */
#define CDNS_UART_TXWM 0x44 /* TX FIFO Trigger Level */
#define CDNS_UART_RXBS 0x48 /* RX FIFO byte status register */
/* Control Register Bit Definitions */
#define CDNS_UART_CR_STOPBRK 0x00000100 /* Stop TX break */
@ -79,6 +80,9 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
#define CDNS_UART_CR_TXRST 0x00000002 /* TX logic reset */
#define CDNS_UART_CR_RXRST 0x00000001 /* RX logic reset */
#define CDNS_UART_CR_RST_TO 0x00000040 /* Restart Timeout Counter */
#define CDNS_UART_RXBS_PARITY 0x00000001 /* Parity error status */
#define CDNS_UART_RXBS_FRAMING 0x00000002 /* Framing error status */
#define CDNS_UART_RXBS_BRK 0x00000004 /* Break error status */
/*
* Mode Register:
@ -126,13 +130,27 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
#define CDNS_UART_IXR_RXEMPTY 0x00000002 /* RX FIFO empty interrupt. */
#define CDNS_UART_IXR_MASK 0x00001FFF /* Valid bit mask */
#define CDNS_UART_RX_IRQS (CDNS_UART_IXR_PARITY | CDNS_UART_IXR_FRAMING | \
CDNS_UART_IXR_OVERRUN | CDNS_UART_IXR_RXTRIG | \
/*
* Do not enable the parity error interrupt.  When it is enabled,
* each Rx parity error produces two events: first the parity error
* interrupt itself, then a normal Rx interrupt carrying the incoming
* data.  Keeping it disabled means a parity error shows up as a
* single Rx interrupt with the parity error bit set in the ISR
* register, and the error is still handled in the desired way.
*/
#define CDNS_UART_RX_IRQS (CDNS_UART_IXR_FRAMING | \
CDNS_UART_IXR_OVERRUN | \
CDNS_UART_IXR_RXTRIG | \
CDNS_UART_IXR_TOUT)
/* Goes in read_status_mask for break detection as the HW doesn't do it */
#define CDNS_UART_IXR_BRK 0x80000000
#define CDNS_UART_IXR_BRK 0x00002000
#define CDNS_UART_RXBS_SUPPORT BIT(1)
/*
* Modem Control register:
* The read/write Modem Control register controls the interface with the modem
@ -172,46 +190,66 @@ struct cdns_uart {
struct clk *pclk;
unsigned int baud;
struct notifier_block clk_rate_change_nb;
u32 quirks;
};
struct cdns_platform_data {
u32 quirks;
};
#define to_cdns_uart(_nb) container_of(_nb, struct cdns_uart, \
clk_rate_change_nb);
static void cdns_uart_handle_rx(struct uart_port *port, unsigned int isrstatus)
/**
* cdns_uart_handle_rx - Handle the received bytes along with Rx errors.
* @dev_id: pointer to the UART port (passed in as the interrupt handler's dev_id)
* @isrstatus: The interrupt status register value as read
* Return: None
*/
static void cdns_uart_handle_rx(void *dev_id, unsigned int isrstatus)
{
/*
* There is no hardware break detection, so we interpret framing
* error with all-zeros data as a break sequence. Most of the time,
* there's another non-zero byte at the end of the sequence.
*/
if (isrstatus & CDNS_UART_IXR_FRAMING) {
while (!(readl(port->membase + CDNS_UART_SR) &
CDNS_UART_SR_RXEMPTY)) {
if (!readl(port->membase + CDNS_UART_FIFO)) {
struct uart_port *port = (struct uart_port *)dev_id;
struct cdns_uart *cdns_uart = port->private_data;
unsigned int data;
unsigned int rxbs_status = 0;
unsigned int status_mask;
unsigned int framerrprocessed = 0;
char status = TTY_NORMAL;
bool is_rxbs_support;
is_rxbs_support = cdns_uart->quirks & CDNS_UART_RXBS_SUPPORT;
while ((readl(port->membase + CDNS_UART_SR) &
CDNS_UART_SR_RXEMPTY) != CDNS_UART_SR_RXEMPTY) {
if (is_rxbs_support)
rxbs_status = readl(port->membase + CDNS_UART_RXBS);
data = readl(port->membase + CDNS_UART_FIFO);
port->icount.rx++;
/*
* There is no hardware break detection in Zynq, so we interpret
* framing error with all-zeros data as a break sequence.
* Most of the time, there's another non-zero byte at the
* end of the sequence.
*/
if (!is_rxbs_support && (isrstatus & CDNS_UART_IXR_FRAMING)) {
if (!data) {
port->read_status_mask |= CDNS_UART_IXR_BRK;
isrstatus &= ~CDNS_UART_IXR_FRAMING;
framerrprocessed = 1;
continue;
}
}
writel(CDNS_UART_IXR_FRAMING, port->membase + CDNS_UART_ISR);
}
if (is_rxbs_support && (rxbs_status & CDNS_UART_RXBS_BRK)) {
port->icount.brk++;
status = TTY_BREAK;
if (uart_handle_break(port))
continue;
}
/* drop byte with parity error if IGNPAR specified */
if (isrstatus & port->ignore_status_mask & CDNS_UART_IXR_PARITY)
isrstatus &= ~(CDNS_UART_IXR_RXTRIG | CDNS_UART_IXR_TOUT);
isrstatus &= port->read_status_mask;
isrstatus &= ~port->ignore_status_mask;
status_mask = port->read_status_mask;
status_mask &= ~port->ignore_status_mask;
isrstatus &= port->read_status_mask;
isrstatus &= ~port->ignore_status_mask;
if (!(isrstatus & (CDNS_UART_IXR_TOUT | CDNS_UART_IXR_RXTRIG)))
return;
while (!(readl(port->membase + CDNS_UART_SR) & CDNS_UART_SR_RXEMPTY)) {
u32 data;
char status = TTY_NORMAL;
data = readl(port->membase + CDNS_UART_FIFO);
/* Non-NULL byte after BREAK is garbage (99%) */
if (data && (port->read_status_mask & CDNS_UART_IXR_BRK)) {
if (data &&
(port->read_status_mask & CDNS_UART_IXR_BRK)) {
port->read_status_mask &= ~CDNS_UART_IXR_BRK;
port->icount.brk++;
if (uart_handle_break(port))
@ -221,57 +259,83 @@ static void cdns_uart_handle_rx(struct uart_port *port, unsigned int isrstatus)
if (uart_handle_sysrq_char(port, data))
continue;
port->icount.rx++;
if (isrstatus & CDNS_UART_IXR_PARITY) {
port->icount.parity++;
status = TTY_PARITY;
} else if (isrstatus & CDNS_UART_IXR_FRAMING) {
port->icount.frame++;
status = TTY_FRAME;
} else if (isrstatus & CDNS_UART_IXR_OVERRUN) {
port->icount.overrun++;
if (is_rxbs_support) {
if ((rxbs_status & CDNS_UART_RXBS_PARITY)
&& (status_mask & CDNS_UART_IXR_PARITY)) {
port->icount.parity++;
status = TTY_PARITY;
}
if ((rxbs_status & CDNS_UART_RXBS_FRAMING)
&& (status_mask & CDNS_UART_IXR_FRAMING)) {
port->icount.frame++;
status = TTY_FRAME;
}
} else {
if (isrstatus & CDNS_UART_IXR_PARITY) {
port->icount.parity++;
status = TTY_PARITY;
}
if ((isrstatus & CDNS_UART_IXR_FRAMING) &&
!framerrprocessed) {
port->icount.frame++;
status = TTY_FRAME;
}
}
uart_insert_char(port, isrstatus, CDNS_UART_IXR_OVERRUN,
data, status);
if (isrstatus & CDNS_UART_IXR_OVERRUN) {
port->icount.overrun++;
tty_insert_flip_char(&port->state->port, 0,
TTY_OVERRUN);
}
tty_insert_flip_char(&port->state->port, data, status);
isrstatus = 0;
}
spin_unlock(&port->lock);
tty_flip_buffer_push(&port->state->port);
spin_lock(&port->lock);
}
static void cdns_uart_handle_tx(struct uart_port *port)
/**
* cdns_uart_handle_tx - Handle the bytes to be Txed.
* @dev_id: pointer to the UART port (passed in as the interrupt handler's dev_id)
* Return: None
*/
static void cdns_uart_handle_tx(void *dev_id)
{
struct uart_port *port = (struct uart_port *)dev_id;
unsigned int numbytes;
if (uart_circ_empty(&port->state->xmit)) {
writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IDR);
return;
} else {
numbytes = port->fifosize;
while (numbytes && !uart_circ_empty(&port->state->xmit) &&
!(readl(port->membase + CDNS_UART_SR) & CDNS_UART_SR_TXFULL)) {
/*
* Get the data from the UART circular buffer
* and write it to the cdns_uart's TX_FIFO
* register.
*/
writel(
port->state->xmit.buf[port->state->xmit.
tail], port->membase + CDNS_UART_FIFO);
port->icount.tx++;
/*
* Adjust the tail of the UART buffer and wrap
* the buffer if it reaches limit.
*/
port->state->xmit.tail =
(port->state->xmit.tail + 1) &
(UART_XMIT_SIZE - 1);
numbytes--;
}
if (uart_circ_chars_pending(
&port->state->xmit) < WAKEUP_CHARS)
uart_write_wakeup(port);
}
numbytes = port->fifosize;
while (numbytes && !uart_circ_empty(&port->state->xmit) &&
!(readl(port->membase + CDNS_UART_SR) & CDNS_UART_SR_TXFULL)) {
/*
* Get the data from the UART circular buffer
* and write it to the cdns_uart's TX_FIFO
* register.
*/
writel(port->state->xmit.buf[port->state->xmit.tail],
port->membase + CDNS_UART_FIFO);
port->icount.tx++;
/*
* Adjust the tail of the UART buffer and wrap
* the buffer if it reaches limit.
*/
port->state->xmit.tail =
(port->state->xmit.tail + 1) & (UART_XMIT_SIZE - 1);
numbytes--;
}
if (uart_circ_chars_pending(&port->state->xmit) < WAKEUP_CHARS)
uart_write_wakeup(port);
}
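A brief aside on the tail update in the loop above (not wording from the patch): UART_XMIT_SIZE is a power of two, so masking with UART_XMIT_SIZE - 1 acts as a cheap modulo that wraps the circular-buffer index:

/* e.g. with UART_XMIT_SIZE == 4096:
 *	(4094 + 1) & 4095 == 4095
 *	(4095 + 1) & 4095 == 0		-> wraps back to the start
 */
tail = (tail + 1) & (UART_XMIT_SIZE - 1);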
/**
@ -284,27 +348,24 @@ static void cdns_uart_handle_tx(struct uart_port *port)
static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
{
struct uart_port *port = (struct uart_port *)dev_id;
unsigned long flags;
unsigned int isrstatus;
spin_lock_irqsave(&port->lock, flags);
spin_lock(&port->lock);
/* Read the interrupt status register to determine which
* interrupt(s) is/are active.
* interrupt(s) is/are active and clear them.
*/
isrstatus = readl(port->membase + CDNS_UART_ISR);
if (isrstatus & CDNS_UART_RX_IRQS)
cdns_uart_handle_rx(port, isrstatus);
if ((isrstatus & CDNS_UART_IXR_TXEMPTY) == CDNS_UART_IXR_TXEMPTY)
cdns_uart_handle_tx(port);
writel(isrstatus, port->membase + CDNS_UART_ISR);
/* be sure to release the lock and tty before leaving */
spin_unlock_irqrestore(&port->lock, flags);
if (isrstatus & CDNS_UART_IXR_TXEMPTY) {
cdns_uart_handle_tx(dev_id);
isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
}
if (isrstatus & CDNS_UART_IXR_MASK)
cdns_uart_handle_rx(dev_id, isrstatus);
spin_unlock(&port->lock);
return IRQ_HANDLED;
}
@ -653,6 +714,10 @@ static void cdns_uart_set_termios(struct uart_port *port,
ctrl_reg |= CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST;
writel(ctrl_reg, port->membase + CDNS_UART_CR);
while (readl(port->membase + CDNS_UART_CR) &
(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST))
cpu_relax();
/*
* Clear the RX disable and TX disable bits and then set the TX enable
* bit and RX enable bit to enable the transmitter and receiver.
@ -736,10 +801,14 @@ static void cdns_uart_set_termios(struct uart_port *port,
*/
static int cdns_uart_startup(struct uart_port *port)
{
struct cdns_uart *cdns_uart = port->private_data;
bool is_brk_support;
int ret;
unsigned long flags;
unsigned int status = 0;
is_brk_support = cdns_uart->quirks & CDNS_UART_RXBS_SUPPORT;
spin_lock_irqsave(&port->lock, flags);
/* Disable the TX and RX */
@ -752,6 +821,10 @@ static int cdns_uart_startup(struct uart_port *port)
writel(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST,
port->membase + CDNS_UART_CR);
while (readl(port->membase + CDNS_UART_CR) &
(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST))
cpu_relax();
/*
* Clear the RX disable bit and then set the RX enable bit to enable
* the receiver.
@ -794,7 +867,11 @@ static int cdns_uart_startup(struct uart_port *port)
}
/* Set the Interrupt Registers with desired interrupts */
writel(CDNS_UART_RX_IRQS, port->membase + CDNS_UART_IER);
if (is_brk_support)
writel(CDNS_UART_RX_IRQS | CDNS_UART_IXR_BRK,
port->membase + CDNS_UART_IER);
else
writel(CDNS_UART_RX_IRQS, port->membase + CDNS_UART_IER);
return 0;
}
@ -993,7 +1070,7 @@ static void cdns_uart_pm(struct uart_port *port, unsigned int state,
}
}
static struct uart_ops cdns_uart_ops = {
static const struct uart_ops cdns_uart_ops = {
.set_mctrl = cdns_uart_set_mctrl,
.get_mctrl = cdns_uart_get_mctrl,
.start_tx = cdns_uart_start_tx,
@ -1088,9 +1165,34 @@ static void __init cdns_early_write(struct console *con, const char *s,
static int __init cdns_early_console_setup(struct earlycon_device *device,
const char *opt)
{
if (!device->port.membase)
struct uart_port *port = &device->port;
if (!port->membase)
return -ENODEV;
/* initialise control register */
writel(CDNS_UART_CR_TX_EN|CDNS_UART_CR_TXRST|CDNS_UART_CR_RXRST,
port->membase + CDNS_UART_CR);
/* only set baud if specified on command line - otherwise
* assume it has been initialized by a boot loader.
*/
if (device->baud) {
u32 cd = 0, bdiv = 0;
u32 mr;
int div8;
cdns_uart_calc_baud_divs(port->uartclk, device->baud,
&bdiv, &cd, &div8);
mr = CDNS_UART_MR_PARITY_NONE;
if (div8)
mr |= CDNS_UART_MR_CLKSEL;
writel(mr, port->membase + CDNS_UART_MR);
writel(cd, port->membase + CDNS_UART_BAUDGEN);
writel(bdiv, port->membase + CDNS_UART_BAUDDIV);
}
device->con->write = cdns_early_write;
return 0;
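For background (my summary of the Cadence UART clocking, not text from the patch): the baud generator divides the reference clock by CD (BAUDGEN) and by BDIV + 1 (BAUDDIV), with an optional divide-by-8 prescaler selected through MR.CLKSEL, so cdns_uart_calc_baud_divs() is solving roughly:

/*	baud = uartclk / (cd * (bdiv + 1))	(uartclk / 8 first if CLKSEL is set)
 *
 *	e.g. uartclk = 100 MHz, target 115200 baud:
 *	     bdiv = 6, cd = 124  ->  100000000 / (124 * 7) ~= 115207  (~0.006% off)
 */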
@ -1328,6 +1430,18 @@ static int cdns_uart_resume(struct device *device)
static SIMPLE_DEV_PM_OPS(cdns_uart_dev_pm_ops, cdns_uart_suspend,
cdns_uart_resume);
static const struct cdns_platform_data zynqmp_uart_def = {
.quirks = CDNS_UART_RXBS_SUPPORT,
};
/* Match table for of_platform binding */
static const struct of_device_id cdns_uart_of_match[] = {
{ .compatible = "xlnx,xuartps", },
{ .compatible = "cdns,uart-r1p8", },
{ .compatible = "cdns,uart-r1p12", .data = &zynqmp_uart_def },
{}
};
MODULE_DEVICE_TABLE(of, cdns_uart_of_match);
/**
* cdns_uart_probe - Platform driver probe
* @pdev: Pointer to the platform device structure
@ -1340,12 +1454,20 @@ static int cdns_uart_probe(struct platform_device *pdev)
struct uart_port *port;
struct resource *res;
struct cdns_uart *cdns_uart_data;
const struct of_device_id *match;
cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data),
GFP_KERNEL);
if (!cdns_uart_data)
return -ENOMEM;
match = of_match_node(cdns_uart_of_match, pdev->dev.of_node);
if (match && match->data) {
const struct cdns_platform_data *data = match->data;
cdns_uart_data->quirks = data->quirks;
}
cdns_uart_data->pclk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(cdns_uart_data->pclk)) {
cdns_uart_data->pclk = devm_clk_get(&pdev->dev, "aper_clk");
@ -1471,14 +1593,6 @@ static int cdns_uart_remove(struct platform_device *pdev)
return rc;
}
/* Match table for of_platform binding */
static const struct of_device_id cdns_uart_of_match[] = {
{ .compatible = "xlnx,xuartps", },
{ .compatible = "cdns,uart-r1p8", },
{}
};
MODULE_DEVICE_TABLE(of, cdns_uart_of_match);
static struct platform_driver cdns_uart_platform_driver = {
.probe = cdns_uart_probe,
.remove = cdns_uart_remove,

View file

@ -1312,12 +1312,12 @@ static int vc_t416_color(struct vc_data *vc, int i,
if (i > vc->vc_npar)
return i;
if (vc->vc_par[i] == 5 && i < vc->vc_npar) {
/* 256 colours -- ubiquitous */
if (vc->vc_par[i] == 5 && i + 1 <= vc->vc_npar) {
/* 256 colours */
i++;
rgb_from_256(vc->vc_par[i], &c);
} else if (vc->vc_par[i] == 2 && i <= vc->vc_npar + 3) {
/* 24 bit -- extremely rare */
} else if (vc->vc_par[i] == 2 && i + 3 <= vc->vc_npar) {
/* 24 bit */
c.r = vc->vc_par[i + 1];
c.g = vc->vc_par[i + 2];
c.b = vc->vc_par[i + 3];
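For context (illustrative, not part of the patch): these branches parse the standard SGR extended-colour sequences, which is why parameter 5 must be followed by exactly one more value and parameter 2 by three:

/*	ESC [ 38 ; 5 ; n m		256-colour foreground, palette entry n
 *	ESC [ 38 ; 2 ; r ; g ; b m	24-bit ("truecolour") foreground
 */
printf("\033[38;5;196m");	/* red (255,0,0) from the 256-colour palette */
printf("\033[38;2;255;128;0m");	/* orange: r=255, g=128, b=0 */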
@ -1415,6 +1415,11 @@ static void csi_m(struct vc_data *vc)
(vc->vc_color & 0x0f);
break;
default:
if (vc->vc_par[i] >= 90 && vc->vc_par[i] <= 107) {
if (vc->vc_par[i] < 100)
vc->vc_intensity = 2;
vc->vc_par[i] -= 60;
}
if (vc->vc_par[i] >= 30 && vc->vc_par[i] <= 37)
vc->vc_color = color_table[vc->vc_par[i] - 30]
| (vc->vc_color & 0xf0);
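My reading of the new default branch above (not wording from the patch): parameters 90-97 are the aixterm "bright" foreground colours and 100-107 the bright backgrounds; subtracting 60 folds them onto the ordinary 30-37 / 40-47 ranges handled just below, with the intensity raised only for the foreground case, e.g.:

printf("\033[91m");	/* bright red foreground:  91 - 60 -> 31 */
printf("\033[104m");	/* bright blue background: 104 - 60 -> 44 */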

View file

@ -272,13 +272,8 @@ static int mknod_ptmx(struct super_block *sb)
struct dentry *root = sb->s_root;
struct pts_fs_info *fsi = DEVPTS_SB(sb);
struct pts_mount_opts *opts = &fsi->mount_opts;
kuid_t root_uid;
kgid_t root_gid;
root_uid = make_kuid(current_user_ns(), 0);
root_gid = make_kgid(current_user_ns(), 0);
if (!uid_valid(root_uid) || !gid_valid(root_gid))
return -EINVAL;
kuid_t ptmx_uid = current_fsuid();
kgid_t ptmx_gid = current_fsgid();
inode_lock(d_inode(root));
@ -309,8 +304,8 @@ static int mknod_ptmx(struct super_block *sb)
mode = S_IFCHR|opts->ptmxmode;
init_special_inode(inode, mode, MKDEV(TTYAUX_MAJOR, 2));
inode->i_uid = root_uid;
inode->i_gid = root_gid;
inode->i_uid = ptmx_uid;
inode->i_gid = ptmx_gid;
d_add(dentry, inode);
@ -336,7 +331,6 @@ static int devpts_remount(struct super_block *sb, int *flags, char *data)
struct pts_fs_info *fsi = DEVPTS_SB(sb);
struct pts_mount_opts *opts = &fsi->mount_opts;
sync_filesystem(sb);
err = parse_mount_options(data, PARSE_REMOUNT, opts);
/*
@ -395,6 +389,7 @@ static int
devpts_fill_super(struct super_block *s, void *data, int silent)
{
struct inode *inode;
int error;
s->s_iflags &= ~SB_I_NODEV;
s->s_blocksize = 1024;
@ -403,10 +398,16 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
s->s_op = &devpts_sops;
s->s_time_gran = 1;
error = -ENOMEM;
s->s_fs_info = new_pts_fs_info(s);
if (!s->s_fs_info)
goto fail;
error = parse_mount_options(data, PARSE_MOUNT, &DEVPTS_SB(s)->mount_opts);
if (error)
goto fail;
error = -ENOMEM;
inode = new_inode(s);
if (!inode)
goto fail;
@ -418,13 +419,21 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
set_nlink(inode, 2);
s->s_root = d_make_root(inode);
if (s->s_root)
return 0;
if (!s->s_root) {
pr_err("get root dentry failed\n");
goto fail;
}
pr_err("get root dentry failed\n");
error = mknod_ptmx(s);
if (error)
goto fail_dput;
return 0;
fail_dput:
dput(s->s_root);
s->s_root = NULL;
fail:
return -ENOMEM;
return error;
}
/*
@ -436,43 +445,15 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
static struct dentry *devpts_mount(struct file_system_type *fs_type,
int flags, const char *dev_name, void *data)
{
int error;
struct pts_mount_opts opts;
struct super_block *s;
error = parse_mount_options(data, PARSE_MOUNT, &opts);
if (error)
return ERR_PTR(error);
s = sget(fs_type, NULL, set_anon_super, flags, NULL);
if (IS_ERR(s))
return ERR_CAST(s);
if (!s->s_root) {
error = devpts_fill_super(s, data, flags & MS_SILENT ? 1 : 0);
if (error)
goto out_undo_sget;
s->s_flags |= MS_ACTIVE;
}
memcpy(&(DEVPTS_SB(s))->mount_opts, &opts, sizeof(opts));
error = mknod_ptmx(s);
if (error)
goto out_undo_sget;
return dget(s->s_root);
out_undo_sget:
deactivate_locked_super(s);
return ERR_PTR(error);
return mount_nodev(fs_type, flags, data, devpts_fill_super);
}
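For orientation (my note, not patch text): mount_nodev() is the generic VFS helper for filesystems without a backing block device; it allocates a fresh anonymous superblock and calls the supplied fill_super callback, which is what lets the option parsing and ptmx node creation move into devpts_fill_super() above. Its prototype, from include/linux/fs.h:

struct dentry *mount_nodev(struct file_system_type *fs_type, int flags,
			   void *data,
			   int (*fill_super)(struct super_block *, void *, int));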
static void devpts_kill_sb(struct super_block *sb)
{
struct pts_fs_info *fsi = DEVPTS_SB(sb);
ida_destroy(&fsi->allocated_ptys);
if (fsi)
ida_destroy(&fsi->allocated_ptys);
kfree(fsi);
kill_litter_super(sb);
}

View file

@ -1099,4 +1099,10 @@ extern bool acpi_has_watchdog(void);
static inline bool acpi_has_watchdog(void) { return false; }
#endif
#ifdef CONFIG_ACPI_SPCR_TABLE
int parse_spcr(bool earlycon);
#else
static inline int parse_spcr(bool earlycon) { return 0; }
#endif
#endif /*_LINUX_ACPI_H*/

View file

@ -53,8 +53,14 @@ enum amba_vendor {
AMBA_VENDOR_ST = 0x80,
AMBA_VENDOR_QCOM = 0x51,
AMBA_VENDOR_LSI = 0xb6,
AMBA_VENDOR_LINUX = 0xfe, /* This value is not official */
};
/* This is used to generate pseudo-ID for AMBA device */
#define AMBA_LINUX_ID(conf, rev, part) \
(((conf) & 0xff) << 24 | ((rev) & 0xf) << 20 | \
AMBA_VENDOR_LINUX << 12 | ((part) & 0xfff))
extern struct bus_type amba_bustype;
#define to_amba_device(d) container_of(d, struct amba_device, dev)
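To make the bit layout of AMBA_LINUX_ID() concrete, a worked expansion with arbitrary values (conf in bits 31:24, rev in 23:20, the unofficial vendor 0xfe in 19:12, part in 11:0):

/* AMBA_LINUX_ID(0x3, 0x2, 0x011)
 *	= (0x3 << 24) | (0x2 << 20) | (0xfe << 12) | 0x011
 *	=  0x03000000 |  0x00200000 |  0x000fe000  | 0x00000011
 *	=  0x032fe011
 */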

View file

@ -104,6 +104,15 @@
#define UART01x_FR_CTS 0x001
#define UART01x_FR_TMSK (UART01x_FR_TXFF + UART01x_FR_BUSY)
/*
* Some bits of the Flag Register on the ZTE device sit at different
* positions from the standard ones.
*/
#define ZX_UART01x_FR_BUSY 0x100
#define ZX_UART01x_FR_DSR 0x008
#define ZX_UART01x_FR_CTS 0x002
#define ZX_UART011_FR_RI 0x001
#define UART011_CR_CTSEN 0x8000 /* CTS hardware flow control */
#define UART011_CR_RTSEN 0x4000 /* RTS hardware flow control */
#define UART011_CR_OUT2 0x2000 /* OUT2 */

View file

@ -118,6 +118,8 @@
#define ATMEL_US_BRGR 0x20 /* Baud Rate Generator Register */
#define ATMEL_US_CD GENMASK(15, 0) /* Clock Divider */
#define ATMEL_US_FP_OFFSET 16 /* Fractional Part */
#define ATMEL_US_FP_MASK 0x7
#define ATMEL_US_RTOR 0x24 /* Receiver Time-out Register for USART */
#define ATMEL_UA_RTOR 0x28 /* Receiver Time-out Register for UART */
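As background on these fractional-part fields (my summary of the SAM USART datasheet formula, not text from this header): the baud divider is CD plus an eighth-step fraction FP, so in normal 16x oversampling mode:

/*	baud = clk / (16 * (CD + FP / 8))
 *
 *	e.g. clk = 66 MHz, target 115200 baud:
 *	     clk / (16 * baud) ~= 35.81  ->  CD = 35, FP = 6  (divider 35.75)
 *	     actual baud ~= 66000000 / (16 * 35.75) ~= 115385  (~0.16% off)
 */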

View file

@ -40,8 +40,13 @@ struct dw_dma_chip {
};
/* Export to the platform drivers */
#if IS_ENABLED(CONFIG_DW_DMAC_CORE)
int dw_dma_probe(struct dw_dma_chip *chip);
int dw_dma_remove(struct dw_dma_chip *chip);
#else
static inline int dw_dma_probe(struct dw_dma_chip *chip) { return -ENODEV; }
static inline int dw_dma_remove(struct dw_dma_chip *chip) { return 0; }
#endif /* CONFIG_DW_DMAC_CORE */
/* DMA API extensions */
struct dw_desc;

View file

@ -41,8 +41,7 @@ struct hsu_dma_chip {
/* Export to the internal users */
int hsu_dma_get_status(struct hsu_dma_chip *chip, unsigned short nr,
u32 *status);
irqreturn_t hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr,
u32 status);
int hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr, u32 status);
/* Export to the platform drivers */
int hsu_dma_probe(struct hsu_dma_chip *chip);
@ -53,10 +52,10 @@ static inline int hsu_dma_get_status(struct hsu_dma_chip *chip,
{
return 0;
}
static inline irqreturn_t hsu_dma_do_irq(struct hsu_dma_chip *chip,
unsigned short nr, u32 status)
static inline int hsu_dma_do_irq(struct hsu_dma_chip *chip, unsigned short nr,
u32 status)
{
return IRQ_NONE;
return 0;
}
static inline int hsu_dma_probe(struct hsu_dma_chip *chip) { return -ENODEV; }
static inline int hsu_dma_remove(struct hsu_dma_chip *chip) { return 0; }

View file

@ -14,6 +14,7 @@
#include <linux/types.h>
#include <linux/init.h>
#include <linux/errno.h>
/* Definitions used by the flattened device tree */
#define OF_DT_HEADER 0xd00dfeed /* marker */
@ -66,6 +67,7 @@ extern int early_init_dt_scan_chosen(unsigned long node, const char *uname,
int depth, void *data);
extern int early_init_dt_scan_memory(unsigned long node, const char *uname,
int depth, void *data);
extern int early_init_dt_scan_chosen_stdout(void);
extern void early_init_fdt_scan_reserved_mem(void);
extern void early_init_fdt_reserve_self(void);
extern void early_init_dt_add_memory_arch(u64 base, u64 size);
@ -94,6 +96,7 @@ extern void early_get_first_memblock_info(void *, phys_addr_t *);
extern u64 of_flat_dt_translate_address(unsigned long node);
extern void of_fdt_limit_memory(int limit);
#else /* CONFIG_OF_FLATTREE */
static inline int early_init_dt_scan_chosen_stdout(void) { return -ENODEV; }
static inline void early_init_fdt_scan_reserved_mem(void) {}
static inline void early_init_fdt_reserve_self(void) {}
static inline const char *of_flat_dt_get_machine_name(void) { return NULL; }

View file

@ -23,6 +23,7 @@
* @dst_id: dst request line
* @m_master: memory master for transfers on allocated channel
* @p_master: peripheral master for transfers on allocated channel
* @hs_polarity: set active low polarity of handshake interface
*/
struct dw_dma_slave {
struct device *dma_dev;
@ -30,6 +31,7 @@ struct dw_dma_slave {
u8 dst_id;
u8 m_master;
u8 p_master;
bool hs_polarity;
};
/**
@ -38,6 +40,7 @@ struct dw_dma_slave {
* @is_private: The device channels should be marked as private and not for
* use by the general purpose DMA channel allocator.
* @is_memcpy: The device channels do support memory-to-memory transfers.
* @is_nollp: The device channels do not support multi-block transfers.
* @chan_allocation_order: Allocate channels starting from 0 or 7
* @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
* @block_size: Maximum block size supported by the controller
@ -49,6 +52,7 @@ struct dw_dma_platform_data {
unsigned int nr_channels;
bool is_private;
bool is_memcpy;
bool is_nollp;
#define CHAN_ALLOCATION_ASCENDING 0 /* zero to seven */
#define CHAN_ALLOCATION_DESCENDING 1 /* seven to zero */
unsigned char chan_allocation_order;

View file

@ -367,14 +367,21 @@ extern const struct earlycon_id __earlycon_table_end[];
#define EARLYCON_DECLARE(_name, fn) OF_EARLYCON_DECLARE(_name, "", fn)
extern int setup_earlycon(char *buf);
extern int of_setup_earlycon(const struct earlycon_id *match,
unsigned long node,
const char *options);
#ifdef CONFIG_SERIAL_EARLYCON
extern bool earlycon_init_is_deferred __initdata;
int setup_earlycon(char *buf);
#else
static const bool earlycon_init_is_deferred;
static inline int setup_earlycon(char *buf) { return 0; }
#endif
struct uart_port *uart_get_console(struct uart_port *ports, int nr,
struct console *c);
int uart_parse_earlycon(char *p, unsigned char *iotype, unsigned long *addr,
int uart_parse_earlycon(char *p, unsigned char *iotype, resource_size_t *addr,
char **options);
void uart_parse_options(char *options, int *baud, int *parity, int *bits,
int *flow);
@ -412,7 +419,7 @@ int uart_resume_port(struct uart_driver *reg, struct uart_port *port);
static inline int uart_tx_stopped(struct uart_port *port)
{
struct tty_struct *tty = port->state->port.tty;
if (tty->stopped || port->hw_stopped)
if ((tty && tty->stopped) || port->hw_stopped)
return 1;
return 0;
}

View file

@ -376,5 +376,13 @@
#define UART_EXAR_TXTRG 0x0a /* Tx FIFO trigger level write-only */
#define UART_EXAR_RXTRG 0x0b /* Rx FIFO trigger level write-only */
/*
* These are definitions for the Altera ALTR_16550_F32/F64/F128.
* The offsets are normalized from 0x100 to 0x40 because of the shift by 2
* (32-bit registers).
*/
#define UART_ALTR_AFR 0x40 /* Additional Features Register */
#define UART_ALTR_EN_TXFIFO_LW 0x01 /* Enable the TX FIFO Low Watermark */
#define UART_ALTR_TX_LOW 0x41 /* Tx FIFO Low Watermark */
#endif /* _LINUX_SERIAL_REG_H */
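A closing note on the "normalized" Altera offsets (my explanation, not comment text from the header): the 8250 memory accessors shift the register offset left by port->regshift before adding it to the base, so with 32-bit-spaced registers (regshift = 2) these values land back on the documented byte offsets:

/*	UART_ALTR_AFR    = 0x40  ->  0x40 << 2 = byte offset 0x100
 *	UART_ALTR_TX_LOW = 0x41  ->  0x41 << 2 = byte offset 0x104
 */
serial_port_out(port, UART_ALTR_TX_LOW, threshold);	/* 8250-style accessor */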