Add support for Cavium LiquidIO ethernet adapters
This patch (V8) adds support for Cavium LiquidIO PCI Express based 10-Gigabit
Ethernet adapters.

Changes in V8:
1) Consolidated all debug macros to call dev_* or netdev_* macros directly,
   per feedback on the previous patch.
2) Changed soft commands to avoid a crash when running in interrupt context.
3) Fixed link status not reflecting the correct status when NetworkManager is
   running. Added MODULE_FIRMWARE declarations.

The previous patches were:

Patch V7:
1) Minor comments from the V6 release regarding debug statements.
2) Fix for large multicast lists.
3) Fixed a lockup if port initialization fails.
4) Enabled MSI by default.
https://patchwork.ozlabs.org/patch/464441/

Patch V6:
1) Addressed the uint64 vs u64 issue, per feedback on the previous patch.
2) Consolidated some receive processing routines.
3) Removed the link status polling method.
https://patchwork.ozlabs.org/patch/459514/

Patch V5:
Based on feedback on the earlier patches, consolidated common functions such
as device init and register programming for CN66XX and CN68XX devices.
https://patchwork.ozlabs.org/patch/438979/

Patch V4:
Changes based on feedback on the earlier patch:
1) Added mmiowb while synchronizing queue updates and other hw interactions.
2) Statistics are now incremented non-atomically per ring; liquidio_get_stats
   adds up the stats of each ring when reporting the total statistics counts.
3) Modified liquidio_ioctl to return proper return codes.
4) Modified device naming to use standard Ethernet naming.
5) Global function names in the driver carry a lio_/liquidio_/octeon_ prefix.
6) Ethtool related changes: removed redundant stats and jiffies; use the
   default ethtool handler for link status; speed setting makes use of
   ethtool_cmd_speed_set.
7) Added checks for pci_map_* return codes.
8) Check for signals while waiting in interruptible mode.
https://patchwork.ozlabs.org/patch/435073/

Patch V3:
Implemented feedback from the previous patch: removed the NAPI config and
DEBUG config options; added BQL and xmit_more support.
https://patchwork.ozlabs.org/patch/422749/

Patch V2:
Implemented feedback from the previous patch.
https://patchwork.ozlabs.org/patch/413539/

First patch:
https://patchwork.ozlabs.org/patch/412946/

Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com>
Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com>
Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com>
Signed-off-by: Robert Richter <Robert.Richter@caviumnetworks.com>
Signed-off-by: Aleksey Makarov <Aleksey.Makarov@caviumnetworks.com>
Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
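As an illustrative aside, the per-ring statistics scheme described in item 2
of Patch V4 amounts to each ring keeping its own counters and summing them
only when totals are requested. A minimal sketch of that pattern follows; the
names lio_ring_stats and lio_sum_ring_stats are hypothetical and are not the
driver's actual structures or functions.

#include <stdint.h>
#include <stddef.h>

struct lio_ring_stats {
	uint64_t packets;   /* updated non-atomically, only by the owning ring */
	uint64_t bytes;
	uint64_t errors;
};

/* Sum the per-ring counters when total device statistics are requested. */
static void lio_sum_ring_stats(const struct lio_ring_stats *rings,
			       size_t num_rings,
			       struct lio_ring_stats *total)
{
	size_t i;

	total->packets = 0;
	total->bytes = 0;
	total->errors = 0;

	for (i = 0; i < num_rings; i++) {
		total->packets += rings[i].packets;
		total->bytes += rings[i].bytes;
		total->errors += rings[i].errors;
	}
}

Because each ring writes only its own counters, the fast path needs no atomic
operations; the aggregation cost is paid only when statistics are read.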
commit f21fb3ed36
parent 048856f4f2
30 changed files with 14457 additions and 12 deletions

MAINTAINERS | 11
@@ -2442,6 +2442,17 @@ S:	Maintained
F:	drivers/iio/light/cm*
F:	Documentation/devicetree/bindings/i2c/trivial-devices.txt

CAVIUM LIQUIDIO NETWORK DRIVER
M:	Derek Chickles <derek.chickles@caviumnetworks.com>
M:	Satanand Burla <satananda.burla@caviumnetworks.com>
M:	Felix Manlunas <felix.manlunas@caviumnetworks.com>
M:	Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com>
L:	netdev@vger.kernel.org
W:	http://www.cavium.com
S:	Supported
F:	drivers/net/ethernet/cavium/
F:	drivers/net/ethernet/cavium/liquidio/

CC2520 IEEE-802.15.4 RADIO DRIVER
M:	Varka Bhadram <varkabhadram@gmail.com>
L:	linux-wpan@vger.kernel.org

drivers/net/ethernet/cavium/Kconfig
@@ -4,37 +4,53 @@

config NET_VENDOR_CAVIUM
	tristate "Cavium ethernet drivers"
	depends on PCI && 64BIT
	depends on PCI
	default y
	---help---
	  Enable support for the Cavium ThunderX Network Interface
	  Controller (NIC). The NIC provides the controller and DMA
	  engines to move network traffic to/from the memory. The NIC
	  works closely with TNS, BGX and SerDes to implement the
	  functions replacing and virtualizing those of a typical
	  standalone PCIe NIC chip.
	  Select this option if you want to enable Cavium network support.

	  If you have a Cavium Thunder board, say Y.
	  If you have a Cavium SoC or network adapter, say Y.

if NET_VENDOR_CAVIUM

config THUNDER_NIC_PF
	tristate "Thunder Physical function driver"
	default NET_VENDOR_CAVIUM
	depends on 64BIT
	default ARCH_THUNDER
	select THUNDER_NIC_BGX
	---help---
	  This driver supports Thunder's NIC physical function.
	  The NIC provides the controller and DMA engines to
	  move network traffic to/from the memory. The NIC
	  works closely with TNS, BGX and SerDes to implement the
	  functions replacing and virtualizing those of a typical
	  standalone PCIe NIC chip.

config THUNDER_NIC_VF
	tristate "Thunder Virtual function driver"
	default NET_VENDOR_CAVIUM
	depends on 64BIT
	default ARCH_THUNDER
	---help---
	  This driver supports Thunder's NIC virtual function.

config THUNDER_NIC_BGX
	tristate "Thunder MAC interface driver (BGX)"
	default NET_VENDOR_CAVIUM
	depends on 64BIT
	default ARCH_THUNDER
	---help---
	  This driver supports programming and controlling of MAC
	  interface from NIC physical function driver.

config LIQUIDIO
	tristate "Cavium LiquidIO support"
	select PTP_1588_CLOCK
	select FW_LOADER
	select LIBCRC32C
	---help---
	  This driver supports Cavium LiquidIO Intelligent Server Adapters
	  based on CN66XX and CN68XX chips.

	  To compile this driver as a module, choose M here: the module
	  will be called liquidio.  This is recommended.

endif # NET_VENDOR_CAVIUM

drivers/net/ethernet/cavium/Makefile
@@ -1,5 +1,5 @@
#
# Makefile for the Cavium ethernet device drivers.
#

obj-$(CONFIG_NET_VENDOR_CAVIUM) += thunder/
obj-$(CONFIG_NET_VENDOR_CAVIUM) += liquidio/

drivers/net/ethernet/cavium/liquidio/Makefile (new file, 16 lines)
@@ -0,0 +1,16 @@
#
# Cavium Liquidio ethernet device driver
#
obj-$(CONFIG_LIQUIDIO) += liquidio.o

liquidio-objs := lio_main.o \
		lio_ethtool.o \
		request_manager.o \
		response_manager.o \
		octeon_device.o \
		cn66xx_device.o \
		cn68xx_device.o \
		octeon_mem_ops.o \
		octeon_droq.o \
		octeon_console.o \
		octeon_nic.o

drivers/net/ethernet/cavium/liquidio/cn66xx_device.c (new file, 796 lines)
@@ -0,0 +1,796 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
int lio_cn6xxx_soft_reset(struct octeon_device *oct)
|
||||
{
|
||||
octeon_write_csr64(oct, CN6XXX_WIN_WR_MASK_REG, 0xFF);
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "BIST enabled for soft reset\n");
|
||||
|
||||
lio_pci_writeq(oct, 1, CN6XXX_CIU_SOFT_BIST);
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_SCRATCH1, 0x1234ULL);
|
||||
|
||||
lio_pci_readq(oct, CN6XXX_CIU_SOFT_RST);
|
||||
lio_pci_writeq(oct, 1, CN6XXX_CIU_SOFT_RST);
|
||||
|
||||
/* make sure that the reset is written before starting timer */
|
||||
mmiowb();
|
||||
|
||||
/* Wait for 100ms as Octeon resets. */
|
||||
mdelay(100);
|
||||
|
||||
if (octeon_read_csr64(oct, CN6XXX_SLI_SCRATCH1) == 0x1234ULL) {
|
||||
dev_err(&oct->pci_dev->dev, "Soft reset failed\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "Reset completed\n");
|
||||
octeon_write_csr64(oct, CN6XXX_WIN_WR_MASK_REG, 0xFF);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void lio_cn6xxx_enable_error_reporting(struct octeon_device *oct)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
pci_read_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, &val);
|
||||
if (val & 0x000f0000) {
|
||||
dev_err(&oct->pci_dev->dev, "PCI-E Link error detected: 0x%08x\n",
|
||||
val & 0x000f0000);
|
||||
}
|
||||
|
||||
val |= 0xf; /* Enable Link error reporting */
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "Enabling PCI-E error reporting..\n");
|
||||
pci_write_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, val);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_pcie_mps(struct octeon_device *oct,
|
||||
enum octeon_pcie_mps mps)
|
||||
{
|
||||
u32 val;
|
||||
u64 r64;
|
||||
|
||||
/* Read config register for MPS */
|
||||
pci_read_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, &val);
|
||||
|
||||
if (mps == PCIE_MPS_DEFAULT) {
|
||||
mps = ((val & (0x7 << 5)) >> 5);
|
||||
} else {
|
||||
val &= ~(0x7 << 5); /* Turn off any MPS bits */
|
||||
val |= (mps << 5); /* Set MPS */
|
||||
pci_write_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, val);
|
||||
}
|
||||
|
||||
/* Set MPS in DPI_SLI_PRT0_CFG to the same value. */
|
||||
r64 = lio_pci_readq(oct, CN6XXX_DPI_SLI_PRTX_CFG(oct->pcie_port));
|
||||
r64 |= (mps << 4);
|
||||
lio_pci_writeq(oct, r64, CN6XXX_DPI_SLI_PRTX_CFG(oct->pcie_port));
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_pcie_mrrs(struct octeon_device *oct,
|
||||
enum octeon_pcie_mrrs mrrs)
|
||||
{
|
||||
u32 val;
|
||||
u64 r64;
|
||||
|
||||
/* Read config register for MRRS */
|
||||
pci_read_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, &val);
|
||||
|
||||
if (mrrs == PCIE_MRRS_DEFAULT) {
|
||||
mrrs = ((val & (0x7 << 12)) >> 12);
|
||||
} else {
|
||||
val &= ~(0x7 << 12); /* Turn off any MRRS bits */
|
||||
val |= (mrrs << 12); /* Set MRRS */
|
||||
pci_write_config_dword(oct->pci_dev, CN6XXX_PCIE_DEVCTL, val);
|
||||
}
|
||||
|
||||
/* Set MRRS in SLI_S2M_PORT0_CTL to the same value. */
|
||||
r64 = octeon_read_csr64(oct, CN6XXX_SLI_S2M_PORTX_CTL(oct->pcie_port));
|
||||
r64 |= mrrs;
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_S2M_PORTX_CTL(oct->pcie_port), r64);
|
||||
|
||||
/* Set MRRS in DPI_SLI_PRT0_CFG to the same value. */
|
||||
r64 = lio_pci_readq(oct, CN6XXX_DPI_SLI_PRTX_CFG(oct->pcie_port));
|
||||
r64 |= mrrs;
|
||||
lio_pci_writeq(oct, r64, CN6XXX_DPI_SLI_PRTX_CFG(oct->pcie_port));
|
||||
}
|
||||
|
||||
u32 lio_cn6xxx_coprocessor_clock(struct octeon_device *oct)
|
||||
{
|
||||
/* Bits 29:24 of MIO_RST_BOOT holds the ref. clock multiplier
|
||||
* for SLI.
|
||||
*/
|
||||
return ((lio_pci_readq(oct, CN6XXX_MIO_RST_BOOT) >> 24) & 0x3f) * 50;
|
||||
}
|
||||
|
||||
u32 lio_cn6xxx_get_oq_ticks(struct octeon_device *oct,
|
||||
u32 time_intr_in_us)
|
||||
{
|
||||
/* This gives the SLI clock per microsec */
|
||||
u32 oqticks_per_us = lio_cn6xxx_coprocessor_clock(oct);
|
||||
|
||||
/* core clock per us / oq ticks will be fractional. To avoid that
|
||||
* we use the method below.
|
||||
*/
|
||||
|
||||
/* This gives the clock cycles per millisecond */
|
||||
oqticks_per_us *= 1000;
|
||||
|
||||
/* This gives the oq ticks (1024 core clock cycles) per millisecond */
|
||||
oqticks_per_us /= 1024;
|
||||
|
||||
/* time_intr is in microseconds. The next 2 steps give the oq ticks
|
||||
* corresponding to time_intr.
|
||||
*/
|
||||
oqticks_per_us *= time_intr_in_us;
|
||||
oqticks_per_us /= 1000;
|
||||
|
||||
return oqticks_per_us;
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_global_input_regs(struct octeon_device *oct)
|
||||
{
|
||||
/* Select Round-Robin Arb, ES, RO, NS for Input Queues */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_INPUT_CONTROL,
|
||||
CN6XXX_INPUT_CTL_MASK);
|
||||
|
||||
/* Instruction Read Size - Max 4 instructions per PCIE Read */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_INSTR_RD_SIZE,
|
||||
0xFFFFFFFFFFFFFFFFULL);
|
||||
|
||||
/* Select PCIE Port for all Input rings. */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_IN_PCIE_PORT,
|
||||
(oct->pcie_port * 0x5555555555555555ULL));
|
||||
}
|
||||
|
||||
static void lio_cn66xx_setup_pkt_ctl_regs(struct octeon_device *oct)
|
||||
{
|
||||
u64 pktctl;
|
||||
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)oct->chip;
|
||||
|
||||
pktctl = octeon_read_csr64(oct, CN6XXX_SLI_PKT_CTL);
|
||||
|
||||
/* 66XX SPECIFIC */
|
||||
if (CFG_GET_OQ_MAX_Q(cn6xxx->conf) <= 4)
|
||||
/* Disable RING_EN if only up to 4 rings are used. */
|
||||
pktctl &= ~(1 << 4);
|
||||
else
|
||||
pktctl |= (1 << 4);
|
||||
|
||||
if (CFG_GET_IS_SLI_BP_ON(cn6xxx->conf))
|
||||
pktctl |= 0xF;
|
||||
else
|
||||
/* Disable per-port backpressure. */
|
||||
pktctl &= ~0xF;
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_CTL, pktctl);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_global_output_regs(struct octeon_device *oct)
|
||||
{
|
||||
u32 time_threshold;
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)oct->chip;
|
||||
|
||||
/* / Select PCI-E Port for all Output queues */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_PCIE_PORT64,
|
||||
(oct->pcie_port * 0x5555555555555555ULL));
|
||||
|
||||
if (CFG_GET_IS_SLI_BP_ON(cn6xxx->conf)) {
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_OQ_WMARK, 32);
|
||||
} else {
|
||||
/* / Set Output queue watermark to 0 to disable backpressure */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_OQ_WMARK, 0);
|
||||
}
|
||||
|
||||
/* / Select Info Ptr for length & data */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_IPTR, 0xFFFFFFFF);
|
||||
|
||||
/* / Select Packet count instead of bytes for SLI_PKTi_CNTS[CNT] */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_OUT_BMODE, 0);
|
||||
|
||||
/* / Select ES,RO,NS setting from register for Output Queue Packet
|
||||
* Address
|
||||
*/
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_DPADDR, 0xFFFFFFFF);
|
||||
|
||||
/* No Relaxed Ordering, No Snoop, 64-bit swap for Output
|
||||
* Queue ScatterList
|
||||
*/
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_SLIST_ROR, 0);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_SLIST_NS, 0);
|
||||
|
||||
/* / ENDIAN_SPECIFIC CHANGES - 0 works for LE. */
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_SLIST_ES64,
|
||||
0x5555555555555555ULL);
|
||||
#else
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_SLIST_ES64, 0ULL);
|
||||
#endif
|
||||
|
||||
/* / No Relaxed Ordering, No Snoop, 64-bit swap for Output Queue Data */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_DATA_OUT_ROR, 0);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_DATA_OUT_NS, 0);
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_DATA_OUT_ES64,
|
||||
0x5555555555555555ULL);
|
||||
|
||||
/* / Set up interrupt packet and time threshold */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_INT_LEVEL_PKTS,
|
||||
(u32)CFG_GET_OQ_INTR_PKT(cn6xxx->conf));
|
||||
time_threshold =
|
||||
lio_cn6xxx_get_oq_ticks(oct, (u32)
|
||||
CFG_GET_OQ_INTR_TIME(cn6xxx->conf));
|
||||
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_INT_LEVEL_TIME, time_threshold);
|
||||
}
|
||||
|
||||
static int lio_cn6xxx_setup_device_regs(struct octeon_device *oct)
|
||||
{
|
||||
lio_cn6xxx_setup_pcie_mps(oct, PCIE_MPS_DEFAULT);
|
||||
lio_cn6xxx_setup_pcie_mrrs(oct, PCIE_MRRS_512B);
|
||||
lio_cn6xxx_enable_error_reporting(oct);
|
||||
|
||||
lio_cn6xxx_setup_global_input_regs(oct);
|
||||
lio_cn66xx_setup_pkt_ctl_regs(oct);
|
||||
lio_cn6xxx_setup_global_output_regs(oct);
|
||||
|
||||
/* Default error timeout value should be 0x200000 to avoid host hang
|
||||
* when reads invalid register
|
||||
*/
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_WINDOW_CTL, 0x200000ULL);
|
||||
return 0;
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_iq_regs(struct octeon_device *oct, u32 iq_no)
|
||||
{
|
||||
struct octeon_instr_queue *iq = oct->instr_queue[iq_no];
|
||||
|
||||
/* Disable Packet-by-Packet mode; No Parse Mode or Skip length */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_IQ_PKT_INSTR_HDR64(iq_no), 0);
|
||||
|
||||
/* Write the start of the input queue's ring and its size */
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_IQ_BASE_ADDR64(iq_no),
|
||||
iq->base_addr_dma);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_IQ_SIZE(iq_no), iq->max_count);
|
||||
|
||||
/* Remember the doorbell & instruction count register addr for this
|
||||
* queue
|
||||
*/
|
||||
iq->doorbell_reg = oct->mmio[0].hw_addr + CN6XXX_SLI_IQ_DOORBELL(iq_no);
|
||||
iq->inst_cnt_reg = oct->mmio[0].hw_addr
|
||||
+ CN6XXX_SLI_IQ_INSTR_COUNT(iq_no);
|
||||
dev_dbg(&oct->pci_dev->dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
|
||||
iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
|
||||
|
||||
/* Store the current instruction counter
|
||||
* (used in flush_iq calculation)
|
||||
*/
|
||||
iq->reset_instr_cnt = readl(iq->inst_cnt_reg);
|
||||
}
|
||||
|
||||
static void lio_cn66xx_setup_iq_regs(struct octeon_device *oct, u32 iq_no)
|
||||
{
|
||||
lio_cn6xxx_setup_iq_regs(oct, iq_no);
|
||||
|
||||
/* Backpressure for this queue - WMARK set to all F's. This effectively
|
||||
* disables the backpressure mechanism.
|
||||
*/
|
||||
octeon_write_csr64(oct, CN66XX_SLI_IQ_BP64(iq_no),
|
||||
(0xFFFFFFFFULL << 32));
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_oq_regs(struct octeon_device *oct, u32 oq_no)
|
||||
{
|
||||
u32 intr;
|
||||
struct octeon_droq *droq = oct->droq[oq_no];
|
||||
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_OQ_BASE_ADDR64(oq_no),
|
||||
droq->desc_ring_dma);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_SIZE(oq_no), droq->max_count);
|
||||
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
|
||||
(droq->buffer_size | (OCT_RH_SIZE << 16)));
|
||||
|
||||
/* Get the mapped address of the pkt_sent and pkts_credit regs */
|
||||
droq->pkts_sent_reg =
|
||||
oct->mmio[0].hw_addr + CN6XXX_SLI_OQ_PKTS_SENT(oq_no);
|
||||
droq->pkts_credit_reg =
|
||||
oct->mmio[0].hw_addr + CN6XXX_SLI_OQ_PKTS_CREDIT(oq_no);
|
||||
|
||||
/* Enable this output queue to generate Packet Timer Interrupt */
|
||||
intr = octeon_read_csr(oct, CN6XXX_SLI_PKT_TIME_INT_ENB);
|
||||
intr |= (1 << oq_no);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_TIME_INT_ENB, intr);
|
||||
|
||||
/* Enable this output queue to generate Packet Timer Interrupt */
|
||||
intr = octeon_read_csr(oct, CN6XXX_SLI_PKT_CNT_INT_ENB);
|
||||
intr |= (1 << oq_no);
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_CNT_INT_ENB, intr);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_enable_io_queues(struct octeon_device *oct)
|
||||
{
|
||||
u32 mask;
|
||||
|
||||
mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_INSTR_SIZE);
|
||||
mask |= oct->io_qmask.iq64B;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_INSTR_SIZE, mask);
|
||||
|
||||
mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_INSTR_ENB);
|
||||
mask |= oct->io_qmask.iq;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_INSTR_ENB, mask);
|
||||
|
||||
mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_OUT_ENB);
|
||||
mask |= oct->io_qmask.oq;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_OUT_ENB, mask);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_disable_io_queues(struct octeon_device *oct)
|
||||
{
|
||||
u32 mask, i, loop = HZ;
|
||||
u32 d32;
|
||||
|
||||
/* Reset the Enable bits for Input Queues. */
|
||||
mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_INSTR_ENB);
|
||||
mask ^= oct->io_qmask.iq;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_INSTR_ENB, mask);
|
||||
|
||||
/* Wait until hardware indicates that the queues are out of reset. */
|
||||
mask = oct->io_qmask.iq;
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PORT_IN_RST_IQ);
|
||||
while (((d32 & mask) != mask) && loop--) {
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PORT_IN_RST_IQ);
|
||||
schedule_timeout_uninterruptible(1);
|
||||
}
|
||||
|
||||
/* Reset the doorbell register for each Input queue. */
|
||||
for (i = 0; i < MAX_OCTEON_INSTR_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.iq & (1UL << i)))
|
||||
continue;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_IQ_DOORBELL(i), 0xFFFFFFFF);
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_IQ_DOORBELL(i));
|
||||
}
|
||||
|
||||
/* Reset the Enable bits for Output Queues. */
|
||||
mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_OUT_ENB);
|
||||
mask ^= oct->io_qmask.oq;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_OUT_ENB, mask);
|
||||
|
||||
/* Wait until hardware indicates that the queues are out of reset. */
|
||||
loop = HZ;
|
||||
mask = oct->io_qmask.oq;
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PORT_IN_RST_OQ);
|
||||
while (((d32 & mask) != mask) && loop--) {
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PORT_IN_RST_OQ);
|
||||
schedule_timeout_uninterruptible(1);
|
||||
}
|
||||
;
|
||||
|
||||
/* Reset the doorbell register for each Output queue. */
|
||||
/* for (i = 0; i < oct->num_oqs; i++) { */
|
||||
for (i = 0; i < MAX_OCTEON_OUTPUT_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.oq & (1UL << i)))
|
||||
continue;
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_PKTS_CREDIT(i), 0xFFFFFFFF);
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_OQ_PKTS_CREDIT(i));
|
||||
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_OQ_PKTS_SENT(i));
|
||||
octeon_write_csr(oct, CN6XXX_SLI_OQ_PKTS_SENT(i), d32);
|
||||
}
|
||||
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PKT_CNT_INT);
|
||||
if (d32)
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_CNT_INT, d32);
|
||||
|
||||
d32 = octeon_read_csr(oct, CN6XXX_SLI_PKT_TIME_INT);
|
||||
if (d32)
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_TIME_INT, d32);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_reinit_regs(struct octeon_device *oct)
|
||||
{
|
||||
u32 i;
|
||||
|
||||
for (i = 0; i < MAX_OCTEON_INSTR_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.iq & (1UL << i)))
|
||||
continue;
|
||||
oct->fn_list.setup_iq_regs(oct, i);
|
||||
}
|
||||
|
||||
for (i = 0; i < MAX_OCTEON_OUTPUT_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.oq & (1UL << i)))
|
||||
continue;
|
||||
oct->fn_list.setup_oq_regs(oct, i);
|
||||
}
|
||||
|
||||
oct->fn_list.setup_device_regs(oct);
|
||||
|
||||
oct->fn_list.enable_interrupt(oct->chip);
|
||||
|
||||
oct->fn_list.enable_io_queues(oct);
|
||||
|
||||
/* for (i = 0; i < oct->num_oqs; i++) { */
|
||||
for (i = 0; i < MAX_OCTEON_OUTPUT_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.oq & (1UL << i)))
|
||||
continue;
|
||||
writel(oct->droq[i]->max_count, oct->droq[i]->pkts_credit_reg);
|
||||
}
|
||||
}
|
||||
|
||||
void
|
||||
lio_cn6xxx_bar1_idx_setup(struct octeon_device *oct,
|
||||
u64 core_addr,
|
||||
u32 idx,
|
||||
int valid)
|
||||
{
|
||||
u64 bar1;
|
||||
|
||||
if (valid == 0) {
|
||||
bar1 = lio_pci_readq(oct, CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
lio_pci_writeq(oct, (bar1 & 0xFFFFFFFEULL),
|
||||
CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
bar1 = lio_pci_readq(oct, CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
return;
|
||||
}
|
||||
|
||||
/* Bits 17:4 of the PCI_BAR1_INDEXx stores bits 35:22 of
|
||||
* the Core Addr
|
||||
*/
|
||||
lio_pci_writeq(oct, (((core_addr >> 22) << 4) | PCI_BAR1_MASK),
|
||||
CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
|
||||
bar1 = lio_pci_readq(oct, CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
}
|
||||
|
||||
void lio_cn6xxx_bar1_idx_write(struct octeon_device *oct,
|
||||
u32 idx,
|
||||
u32 mask)
|
||||
{
|
||||
lio_pci_writeq(oct, mask, CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
}
|
||||
|
||||
u32 lio_cn6xxx_bar1_idx_read(struct octeon_device *oct, u32 idx)
|
||||
{
|
||||
return (u32)lio_pci_readq(oct, CN6XXX_BAR1_REG(idx, oct->pcie_port));
|
||||
}
|
||||
|
||||
u32
|
||||
lio_cn6xxx_update_read_index(struct octeon_device *oct __attribute__((unused)),
|
||||
struct octeon_instr_queue *iq)
|
||||
{
|
||||
u32 new_idx = readl(iq->inst_cnt_reg);
|
||||
|
||||
/* The new instr cnt reg is a 32-bit counter that can roll over. We have
|
||||
* noted the counter's initial value at init time into
|
||||
* reset_instr_cnt
|
||||
*/
|
||||
if (iq->reset_instr_cnt < new_idx)
|
||||
new_idx -= iq->reset_instr_cnt;
|
||||
else
|
||||
new_idx += (0xffffffff - iq->reset_instr_cnt) + 1;
|
||||
|
||||
/* Modulo of the new index with the IQ size will give us
|
||||
* the new index.
|
||||
*/
|
||||
new_idx %= iq->max_count;
|
||||
|
||||
return new_idx;
|
||||
}
|
||||
|
||||
void lio_cn6xxx_enable_interrupt(void *chip)
|
||||
{
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)chip;
|
||||
u64 mask = cn6xxx->intr_mask64 | CN6XXX_INTR_DMA0_FORCE;
|
||||
|
||||
/* Enable Interrupt */
|
||||
writeq(mask, cn6xxx->intr_enb_reg64);
|
||||
}
|
||||
|
||||
void lio_cn6xxx_disable_interrupt(void *chip)
|
||||
{
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)chip;
|
||||
|
||||
/* Disable Interrupts */
|
||||
writeq(0, cn6xxx->intr_enb_reg64);
|
||||
|
||||
/* make sure interrupts are really disabled */
|
||||
mmiowb();
|
||||
}
|
||||
|
||||
void lio_cn6xxx_get_pcie_qlmport(struct octeon_device *oct)
|
||||
{
|
||||
/* CN63xx Pass2 and newer parts implements the SLI_MAC_NUMBER register
|
||||
* to determine the PCIE port #
|
||||
*/
|
||||
oct->pcie_port = octeon_read_csr(oct, CN6XXX_SLI_MAC_NUMBER) & 0xff;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "Using PCIE Port %d\n", oct->pcie_port);
|
||||
}
|
||||
|
||||
void
|
||||
lio_cn6xxx_process_pcie_error_intr(struct octeon_device *oct, u64 intr64)
|
||||
{
|
||||
dev_err(&oct->pci_dev->dev, "Error Intr: 0x%016llx\n",
|
||||
CVM_CAST64(intr64));
|
||||
}
|
||||
|
||||
int lio_cn6xxx_process_droq_intr_regs(struct octeon_device *oct)
|
||||
{
|
||||
struct octeon_droq *droq;
|
||||
u32 oq_no, pkt_count, droq_time_mask, droq_mask, droq_int_enb;
|
||||
u32 droq_cnt_enb, droq_cnt_mask;
|
||||
|
||||
droq_cnt_enb = octeon_read_csr(oct, CN6XXX_SLI_PKT_CNT_INT_ENB);
|
||||
droq_cnt_mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_CNT_INT);
|
||||
droq_mask = droq_cnt_mask & droq_cnt_enb;
|
||||
|
||||
droq_time_mask = octeon_read_csr(oct, CN6XXX_SLI_PKT_TIME_INT);
|
||||
droq_int_enb = octeon_read_csr(oct, CN6XXX_SLI_PKT_TIME_INT_ENB);
|
||||
droq_mask |= (droq_time_mask & droq_int_enb);
|
||||
|
||||
droq_mask &= oct->io_qmask.oq;
|
||||
|
||||
oct->droq_intr = 0;
|
||||
|
||||
/* for (oq_no = 0; oq_no < oct->num_oqs; oq_no++) { */
|
||||
for (oq_no = 0; oq_no < MAX_OCTEON_OUTPUT_QUEUES; oq_no++) {
|
||||
if (!(droq_mask & (1 << oq_no)))
|
||||
continue;
|
||||
|
||||
droq = oct->droq[oq_no];
|
||||
pkt_count = octeon_droq_check_hw_for_pkts(oct, droq);
|
||||
if (pkt_count) {
|
||||
oct->droq_intr |= (1ULL << oq_no);
|
||||
if (droq->ops.poll_mode) {
|
||||
u32 value;
|
||||
u32 reg;
|
||||
|
||||
struct octeon_cn6xxx *cn6xxx =
|
||||
(struct octeon_cn6xxx *)oct->chip;
|
||||
|
||||
/* disable interrupts for this droq */
|
||||
spin_lock
|
||||
(&cn6xxx->lock_for_droq_int_enb_reg);
|
||||
reg = CN6XXX_SLI_PKT_TIME_INT_ENB;
|
||||
value = octeon_read_csr(oct, reg);
|
||||
value &= ~(1 << oq_no);
|
||||
octeon_write_csr(oct, reg, value);
|
||||
reg = CN6XXX_SLI_PKT_CNT_INT_ENB;
|
||||
value = octeon_read_csr(oct, reg);
|
||||
value &= ~(1 << oq_no);
|
||||
octeon_write_csr(oct, reg, value);
|
||||
|
||||
/* Ensure that the enable register is written.
|
||||
*/
|
||||
mmiowb();
|
||||
|
||||
spin_unlock(&cn6xxx->lock_for_droq_int_enb_reg);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
droq_time_mask &= oct->io_qmask.oq;
|
||||
droq_cnt_mask &= oct->io_qmask.oq;
|
||||
|
||||
/* Reset the PKT_CNT/TIME_INT registers. */
|
||||
if (droq_time_mask)
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_TIME_INT, droq_time_mask);
|
||||
|
||||
if (droq_cnt_mask) /* reset PKT_CNT register:66xx */
|
||||
octeon_write_csr(oct, CN6XXX_SLI_PKT_CNT_INT, droq_cnt_mask);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
irqreturn_t lio_cn6xxx_process_interrupt_regs(void *dev)
|
||||
{
|
||||
struct octeon_device *oct = (struct octeon_device *)dev;
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)oct->chip;
|
||||
u64 intr64;
|
||||
|
||||
intr64 = readq(cn6xxx->intr_sum_reg64);
|
||||
|
||||
/* If our device has interrupted, then proceed.
|
||||
* Also check for all f's if interrupt was triggered on an error
|
||||
* and the PCI read fails.
|
||||
*/
|
||||
if (!intr64 || (intr64 == 0xFFFFFFFFFFFFFFFFULL))
|
||||
return IRQ_NONE;
|
||||
|
||||
oct->int_status = 0;
|
||||
|
||||
if (intr64 & CN6XXX_INTR_ERR)
|
||||
lio_cn6xxx_process_pcie_error_intr(oct, intr64);
|
||||
|
||||
if (intr64 & CN6XXX_INTR_PKT_DATA) {
|
||||
lio_cn6xxx_process_droq_intr_regs(oct);
|
||||
oct->int_status |= OCT_DEV_INTR_PKT_DATA;
|
||||
}
|
||||
|
||||
if (intr64 & CN6XXX_INTR_DMA0_FORCE)
|
||||
oct->int_status |= OCT_DEV_INTR_DMA0_FORCE;
|
||||
|
||||
if (intr64 & CN6XXX_INTR_DMA1_FORCE)
|
||||
oct->int_status |= OCT_DEV_INTR_DMA1_FORCE;
|
||||
|
||||
/* Clear the current interrupts */
|
||||
writeq(intr64, cn6xxx->intr_sum_reg64);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
void lio_cn6xxx_setup_reg_address(struct octeon_device *oct,
|
||||
void *chip,
|
||||
struct octeon_reg_list *reg_list)
|
||||
{
|
||||
u8 __iomem *bar0_pciaddr = oct->mmio[0].hw_addr;
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)chip;
|
||||
|
||||
reg_list->pci_win_wr_addr_hi =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_ADDR_HI);
|
||||
reg_list->pci_win_wr_addr_lo =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_ADDR_LO);
|
||||
reg_list->pci_win_wr_addr =
|
||||
(u64 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_ADDR64);
|
||||
|
||||
reg_list->pci_win_rd_addr_hi =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_ADDR_HI);
|
||||
reg_list->pci_win_rd_addr_lo =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_ADDR_LO);
|
||||
reg_list->pci_win_rd_addr =
|
||||
(u64 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_ADDR64);
|
||||
|
||||
reg_list->pci_win_wr_data_hi =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_DATA_HI);
|
||||
reg_list->pci_win_wr_data_lo =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_DATA_LO);
|
||||
reg_list->pci_win_wr_data =
|
||||
(u64 __iomem *)(bar0_pciaddr + CN6XXX_WIN_WR_DATA64);
|
||||
|
||||
reg_list->pci_win_rd_data_hi =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_DATA_HI);
|
||||
reg_list->pci_win_rd_data_lo =
|
||||
(u32 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_DATA_LO);
|
||||
reg_list->pci_win_rd_data =
|
||||
(u64 __iomem *)(bar0_pciaddr + CN6XXX_WIN_RD_DATA64);
|
||||
|
||||
lio_cn6xxx_get_pcie_qlmport(oct);
|
||||
|
||||
cn6xxx->intr_sum_reg64 = bar0_pciaddr + CN6XXX_SLI_INT_SUM64;
|
||||
cn6xxx->intr_mask64 = CN6XXX_INTR_MASK;
|
||||
cn6xxx->intr_enb_reg64 =
|
||||
bar0_pciaddr + CN6XXX_SLI_INT_ENB64(oct->pcie_port);
|
||||
}
|
||||
|
||||
int lio_setup_cn66xx_octeon_device(struct octeon_device *oct)
|
||||
{
|
||||
struct octeon_cn6xxx *cn6xxx = (struct octeon_cn6xxx *)oct->chip;
|
||||
|
||||
if (octeon_map_pci_barx(oct, 0, 0))
|
||||
return 1;
|
||||
|
||||
if (octeon_map_pci_barx(oct, 1, MAX_BAR1_IOREMAP_SIZE)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s CN66XX BAR1 map failed\n",
|
||||
__func__);
|
||||
octeon_unmap_pci_barx(oct, 0);
|
||||
return 1;
|
||||
}
|
||||
|
||||
spin_lock_init(&cn6xxx->lock_for_droq_int_enb_reg);
|
||||
|
||||
oct->fn_list.setup_iq_regs = lio_cn66xx_setup_iq_regs;
|
||||
oct->fn_list.setup_oq_regs = lio_cn6xxx_setup_oq_regs;
|
||||
|
||||
oct->fn_list.soft_reset = lio_cn6xxx_soft_reset;
|
||||
oct->fn_list.setup_device_regs = lio_cn6xxx_setup_device_regs;
|
||||
oct->fn_list.reinit_regs = lio_cn6xxx_reinit_regs;
|
||||
oct->fn_list.update_iq_read_idx = lio_cn6xxx_update_read_index;
|
||||
|
||||
oct->fn_list.bar1_idx_setup = lio_cn6xxx_bar1_idx_setup;
|
||||
oct->fn_list.bar1_idx_write = lio_cn6xxx_bar1_idx_write;
|
||||
oct->fn_list.bar1_idx_read = lio_cn6xxx_bar1_idx_read;
|
||||
|
||||
oct->fn_list.process_interrupt_regs = lio_cn6xxx_process_interrupt_regs;
|
||||
oct->fn_list.enable_interrupt = lio_cn6xxx_enable_interrupt;
|
||||
oct->fn_list.disable_interrupt = lio_cn6xxx_disable_interrupt;
|
||||
|
||||
oct->fn_list.enable_io_queues = lio_cn6xxx_enable_io_queues;
|
||||
oct->fn_list.disable_io_queues = lio_cn6xxx_disable_io_queues;
|
||||
|
||||
lio_cn6xxx_setup_reg_address(oct, oct->chip, &oct->reg_list);
|
||||
|
||||
cn6xxx->conf = (struct octeon_config *)
|
||||
oct_get_config_info(oct, LIO_210SV);
|
||||
if (!cn6xxx->conf) {
|
||||
dev_err(&oct->pci_dev->dev, "%s No Config found for CN66XX\n",
|
||||
__func__);
|
||||
octeon_unmap_pci_barx(oct, 0);
|
||||
octeon_unmap_pci_barx(oct, 1);
|
||||
return 1;
|
||||
}
|
||||
|
||||
oct->coproc_clock_rate = 1000000ULL * lio_cn6xxx_coprocessor_clock(oct);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int lio_validate_cn6xxx_config_info(struct octeon_device *oct,
|
||||
struct octeon_config *conf6xxx)
|
||||
{
|
||||
/* int total_instrs = 0; */
|
||||
|
||||
if (CFG_GET_IQ_MAX_Q(conf6xxx) > CN6XXX_MAX_INPUT_QUEUES) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: Num IQ (%d) exceeds Max (%d)\n",
|
||||
__func__, CFG_GET_IQ_MAX_Q(conf6xxx),
|
||||
CN6XXX_MAX_INPUT_QUEUES);
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (CFG_GET_OQ_MAX_Q(conf6xxx) > CN6XXX_MAX_OUTPUT_QUEUES) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: Num OQ (%d) exceeds Max (%d)\n",
|
||||
__func__, CFG_GET_OQ_MAX_Q(conf6xxx),
|
||||
CN6XXX_MAX_OUTPUT_QUEUES);
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (CFG_GET_IQ_INSTR_TYPE(conf6xxx) != OCTEON_32BYTE_INSTR &&
|
||||
CFG_GET_IQ_INSTR_TYPE(conf6xxx) != OCTEON_64BYTE_INSTR) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: Invalid instr type for IQ\n",
|
||||
__func__);
|
||||
return 1;
|
||||
}
|
||||
if (!(CFG_GET_OQ_INFO_PTR(conf6xxx)) ||
|
||||
!(CFG_GET_OQ_REFILL_THRESHOLD(conf6xxx))) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: Invalid parameter for OQ\n",
|
||||
__func__);
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (!(CFG_GET_OQ_INTR_TIME(conf6xxx))) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: No Time Interrupt for OQ\n",
|
||||
__func__);
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}

drivers/net/ethernet/cavium/liquidio/cn66xx_device.h (new file, 107 lines)
@@ -0,0 +1,107 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file cn66xx_device.h
|
||||
* \brief Host Driver: Routines that perform CN66XX specific operations.
|
||||
*/
|
||||
|
||||
#ifndef __CN66XX_DEVICE_H__
|
||||
#define __CN66XX_DEVICE_H__
|
||||
|
||||
/* Register address and configuration for a CN6XXX devices.
|
||||
* If device specific changes need to be made then add a struct to include
|
||||
* device specific fields as shown in the commented section
|
||||
*/
|
||||
struct octeon_cn6xxx {
|
||||
/** PCI interrupt summary register */
|
||||
u8 __iomem *intr_sum_reg64;
|
||||
|
||||
/** PCI interrupt enable register */
|
||||
u8 __iomem *intr_enb_reg64;
|
||||
|
||||
/** The PCI interrupt mask used by interrupt handler */
|
||||
u64 intr_mask64;
|
||||
|
||||
struct octeon_config *conf;
|
||||
|
||||
/* Example additional fields - not used currently
|
||||
* struct {
|
||||
* }cn6xyz;
|
||||
*/
|
||||
|
||||
/* For the purpose of atomic access to interrupt enable reg */
|
||||
spinlock_t lock_for_droq_int_enb_reg;
|
||||
|
||||
};
|
||||
|
||||
enum octeon_pcie_mps {
|
||||
PCIE_MPS_DEFAULT = -1, /* Use the default setup by BIOS */
|
||||
PCIE_MPS_128B = 0,
|
||||
PCIE_MPS_256B = 1
|
||||
};
|
||||
|
||||
enum octeon_pcie_mrrs {
|
||||
PCIE_MRRS_DEFAULT = -1, /* Use the default setup by BIOS */
|
||||
PCIE_MRRS_128B = 0,
|
||||
PCIE_MRRS_256B = 1,
|
||||
PCIE_MRRS_512B = 2,
|
||||
PCIE_MRRS_1024B = 3,
|
||||
PCIE_MRRS_2048B = 4,
|
||||
PCIE_MRRS_4096B = 5
|
||||
};
|
||||
|
||||
/* Common functions for 66xx and 68xx */
|
||||
int lio_cn6xxx_soft_reset(struct octeon_device *oct);
|
||||
void lio_cn6xxx_enable_error_reporting(struct octeon_device *oct);
|
||||
void lio_cn6xxx_setup_pcie_mps(struct octeon_device *oct,
|
||||
enum octeon_pcie_mps mps);
|
||||
void lio_cn6xxx_setup_pcie_mrrs(struct octeon_device *oct,
|
||||
enum octeon_pcie_mrrs mrrs);
|
||||
void lio_cn6xxx_setup_global_input_regs(struct octeon_device *oct);
|
||||
void lio_cn6xxx_setup_global_output_regs(struct octeon_device *oct);
|
||||
void lio_cn6xxx_setup_iq_regs(struct octeon_device *oct, u32 iq_no);
|
||||
void lio_cn6xxx_setup_oq_regs(struct octeon_device *oct, u32 oq_no);
|
||||
void lio_cn6xxx_enable_io_queues(struct octeon_device *oct);
|
||||
void lio_cn6xxx_disable_io_queues(struct octeon_device *oct);
|
||||
void lio_cn6xxx_process_pcie_error_intr(struct octeon_device *oct, u64 intr64);
|
||||
int lio_cn6xxx_process_droq_intr_regs(struct octeon_device *oct);
|
||||
irqreturn_t lio_cn6xxx_process_interrupt_regs(void *dev);
|
||||
void lio_cn6xxx_reinit_regs(struct octeon_device *oct);
|
||||
void lio_cn6xxx_bar1_idx_setup(struct octeon_device *oct, u64 core_addr,
|
||||
u32 idx, int valid);
|
||||
void lio_cn6xxx_bar1_idx_write(struct octeon_device *oct, u32 idx, u32 mask);
|
||||
u32 lio_cn6xxx_bar1_idx_read(struct octeon_device *oct, u32 idx);
|
||||
u32
|
||||
lio_cn6xxx_update_read_index(struct octeon_device *oct __attribute__((unused)),
|
||||
struct octeon_instr_queue *iq);
|
||||
void lio_cn6xxx_enable_interrupt(void *chip);
|
||||
void lio_cn6xxx_disable_interrupt(void *chip);
|
||||
void cn6xxx_get_pcie_qlmport(struct octeon_device *oct);
|
||||
void lio_cn6xxx_setup_reg_address(struct octeon_device *oct, void *chip,
|
||||
struct octeon_reg_list *reg_list);
|
||||
u32 lio_cn6xxx_coprocessor_clock(struct octeon_device *oct);
|
||||
u32 lio_cn6xxx_get_oq_ticks(struct octeon_device *oct, u32 time_intr_in_us);
|
||||
int lio_setup_cn66xx_octeon_device(struct octeon_device *);
|
||||
int lio_validate_cn6xxx_config_info(struct octeon_device *oct,
|
||||
struct octeon_config *);
|
||||
|
||||
#endif

drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h (new file, 535 lines)
@@ -0,0 +1,535 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file cn66xx_regs.h
|
||||
* \brief Host Driver: Register Address and Register Mask values for
|
||||
* Octeon CN66XX devices.
|
||||
*/
|
||||
|
||||
#ifndef __CN66XX_REGS_H__
|
||||
#define __CN66XX_REGS_H__
|
||||
|
||||
#define CN6XXX_XPANSION_BAR 0x30
|
||||
|
||||
#define CN6XXX_MSI_CAP 0x50
|
||||
#define CN6XXX_MSI_ADDR_LO 0x54
|
||||
#define CN6XXX_MSI_ADDR_HI 0x58
|
||||
#define CN6XXX_MSI_DATA 0x5C
|
||||
|
||||
#define CN6XXX_PCIE_CAP 0x70
|
||||
#define CN6XXX_PCIE_DEVCAP 0x74
|
||||
#define CN6XXX_PCIE_DEVCTL 0x78
|
||||
#define CN6XXX_PCIE_LINKCAP 0x7C
|
||||
#define CN6XXX_PCIE_LINKCTL 0x80
|
||||
#define CN6XXX_PCIE_SLOTCAP 0x84
|
||||
#define CN6XXX_PCIE_SLOTCTL 0x88
|
||||
|
||||
#define CN6XXX_PCIE_ENH_CAP 0x100
|
||||
#define CN6XXX_PCIE_UNCORR_ERR_STATUS 0x104
|
||||
#define CN6XXX_PCIE_UNCORR_ERR_MASK 0x108
|
||||
#define CN6XXX_PCIE_UNCORR_ERR 0x10C
|
||||
#define CN6XXX_PCIE_CORR_ERR_STATUS 0x110
|
||||
#define CN6XXX_PCIE_CORR_ERR_MASK 0x114
|
||||
#define CN6XXX_PCIE_ADV_ERR_CAP 0x118
|
||||
|
||||
#define CN6XXX_PCIE_ACK_REPLAY_TIMER 0x700
|
||||
#define CN6XXX_PCIE_OTHER_MSG 0x704
|
||||
#define CN6XXX_PCIE_PORT_FORCE_LINK 0x708
|
||||
#define CN6XXX_PCIE_ACK_FREQ 0x70C
|
||||
#define CN6XXX_PCIE_PORT_LINK_CTL 0x710
|
||||
#define CN6XXX_PCIE_LANE_SKEW 0x714
|
||||
#define CN6XXX_PCIE_SYM_NUM 0x718
|
||||
#define CN6XXX_PCIE_FLTMSK 0x720
|
||||
|
||||
/* ############## BAR0 Registers ################ */
|
||||
|
||||
#define CN6XXX_SLI_CTL_PORT0 0x0050
|
||||
#define CN6XXX_SLI_CTL_PORT1 0x0060
|
||||
|
||||
#define CN6XXX_SLI_WINDOW_CTL 0x02E0
|
||||
#define CN6XXX_SLI_DBG_DATA 0x0310
|
||||
#define CN6XXX_SLI_SCRATCH1 0x03C0
|
||||
#define CN6XXX_SLI_SCRATCH2 0x03D0
|
||||
#define CN6XXX_SLI_CTL_STATUS 0x0570
|
||||
|
||||
#define CN6XXX_WIN_WR_ADDR_LO 0x0000
|
||||
#define CN6XXX_WIN_WR_ADDR_HI 0x0004
|
||||
#define CN6XXX_WIN_WR_ADDR64 CN6XXX_WIN_WR_ADDR_LO
|
||||
|
||||
#define CN6XXX_WIN_RD_ADDR_LO 0x0010
|
||||
#define CN6XXX_WIN_RD_ADDR_HI 0x0014
|
||||
#define CN6XXX_WIN_RD_ADDR64 CN6XXX_WIN_RD_ADDR_LO
|
||||
|
||||
#define CN6XXX_WIN_WR_DATA_LO 0x0020
|
||||
#define CN6XXX_WIN_WR_DATA_HI 0x0024
|
||||
#define CN6XXX_WIN_WR_DATA64 CN6XXX_WIN_WR_DATA_LO
|
||||
|
||||
#define CN6XXX_WIN_RD_DATA_LO 0x0040
|
||||
#define CN6XXX_WIN_RD_DATA_HI 0x0044
|
||||
#define CN6XXX_WIN_RD_DATA64 CN6XXX_WIN_RD_DATA_LO
|
||||
|
||||
#define CN6XXX_WIN_WR_MASK_LO 0x0030
|
||||
#define CN6XXX_WIN_WR_MASK_HI 0x0034
|
||||
#define CN6XXX_WIN_WR_MASK_REG CN6XXX_WIN_WR_MASK_LO
|
||||
|
||||
/* 1 register (32-bit) to enable Input queues */
|
||||
#define CN6XXX_SLI_PKT_INSTR_ENB 0x1000
|
||||
|
||||
/* 1 register (32-bit) to enable Output queues */
|
||||
#define CN6XXX_SLI_PKT_OUT_ENB 0x1010
|
||||
|
||||
/* 1 register (32-bit) to determine whether Output queues are in reset. */
|
||||
#define CN6XXX_SLI_PORT_IN_RST_OQ 0x11F0
|
||||
|
||||
/* 1 register (32-bit) to determine whether Input queues are in reset. */
|
||||
#define CN6XXX_SLI_PORT_IN_RST_IQ 0x11F4
|
||||
|
||||
/*###################### REQUEST QUEUE #########################*/
|
||||
|
||||
/* 1 register (32-bit) - instr. size of each input queue. */
|
||||
#define CN6XXX_SLI_PKT_INSTR_SIZE 0x1020
|
||||
|
||||
/* 32 registers for Input Queue Instr Count - SLI_PKT_IN_DONE0_CNTS */
|
||||
#define CN6XXX_SLI_IQ_INSTR_COUNT_START 0x2000
|
||||
|
||||
/* 32 registers for Input Queue Start Addr - SLI_PKT0_INSTR_BADDR */
|
||||
#define CN6XXX_SLI_IQ_BASE_ADDR_START64 0x2800
|
||||
|
||||
/* 32 registers for Input Doorbell - SLI_PKT0_INSTR_BAOFF_DBELL */
|
||||
#define CN6XXX_SLI_IQ_DOORBELL_START 0x2C00
|
||||
|
||||
/* 32 registers for Input Queue size - SLI_PKT0_INSTR_FIFO_RSIZE */
|
||||
#define CN6XXX_SLI_IQ_SIZE_START 0x3000
|
||||
|
||||
/* 32 registers for Instruction Header Options - SLI_PKT0_INSTR_HEADER */
|
||||
#define CN6XXX_SLI_IQ_PKT_INSTR_HDR_START64 0x3400
|
||||
|
||||
/* 1 register (64-bit) - Back Pressure for each input queue - SLI_PKT0_IN_BP */
|
||||
#define CN66XX_SLI_INPUT_BP_START64 0x3800
|
||||
|
||||
/* Each Input Queue register is at a 16-byte Offset in BAR0 */
|
||||
#define CN6XXX_IQ_OFFSET 0x10
|
||||
|
||||
/* 1 register (32-bit) - ES, RO, NS, Arbitration for Input Queue Data &
|
||||
* gather list fetches. SLI_PKT_INPUT_CONTROL.
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_INPUT_CONTROL 0x1170
|
||||
|
||||
/* 1 register (64-bit) - Number of instructions to read at one time
|
||||
* - 2 bits for each input ring. SLI_PKT_INSTR_RD_SIZE.
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_INSTR_RD_SIZE 0x11A0
|
||||
|
||||
/* 1 register (64-bit) - Assign Input ring to MAC port
|
||||
* - 2 bits for each input ring. SLI_PKT_IN_PCIE_PORT.
|
||||
*/
|
||||
#define CN6XXX_SLI_IN_PCIE_PORT 0x11B0
|
||||
|
||||
/*------- Request Queue Macros ---------*/
|
||||
#define CN6XXX_SLI_IQ_BASE_ADDR64(iq) \
|
||||
(CN6XXX_SLI_IQ_BASE_ADDR_START64 + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_IQ_SIZE(iq) \
|
||||
(CN6XXX_SLI_IQ_SIZE_START + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_IQ_PKT_INSTR_HDR64(iq) \
|
||||
(CN6XXX_SLI_IQ_PKT_INSTR_HDR_START64 + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_IQ_DOORBELL(iq) \
|
||||
(CN6XXX_SLI_IQ_DOORBELL_START + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_IQ_INSTR_COUNT(iq) \
|
||||
(CN6XXX_SLI_IQ_INSTR_COUNT_START + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
#define CN66XX_SLI_IQ_BP64(iq) \
|
||||
(CN66XX_SLI_INPUT_BP_START64 + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
/*------------------ Masks ----------------*/
|
||||
#define CN6XXX_INPUT_CTL_ROUND_ROBIN_ARB BIT(22)
|
||||
#define CN6XXX_INPUT_CTL_DATA_NS BIT(8)
|
||||
#define CN6XXX_INPUT_CTL_DATA_ES_64B_SWAP BIT(6)
|
||||
#define CN6XXX_INPUT_CTL_DATA_RO BIT(5)
|
||||
#define CN6XXX_INPUT_CTL_USE_CSR BIT(4)
|
||||
#define CN6XXX_INPUT_CTL_GATHER_NS BIT(3)
|
||||
#define CN6XXX_INPUT_CTL_GATHER_ES_64B_SWAP BIT(2)
|
||||
#define CN6XXX_INPUT_CTL_GATHER_RO BIT(1)
|
||||
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
#define CN6XXX_INPUT_CTL_MASK \
|
||||
(CN6XXX_INPUT_CTL_DATA_ES_64B_SWAP \
|
||||
| CN6XXX_INPUT_CTL_USE_CSR \
|
||||
| CN6XXX_INPUT_CTL_GATHER_ES_64B_SWAP)
|
||||
#else
|
||||
#define CN6XXX_INPUT_CTL_MASK \
|
||||
(CN6XXX_INPUT_CTL_DATA_ES_64B_SWAP \
|
||||
| CN6XXX_INPUT_CTL_USE_CSR)
|
||||
#endif
|
||||
|
||||
/*############################ OUTPUT QUEUE #########################*/
|
||||
|
||||
/* 32 registers for Output queue buffer and info size - SLI_PKT0_OUT_SIZE */
|
||||
#define CN6XXX_SLI_OQ0_BUFF_INFO_SIZE 0x0C00
|
||||
|
||||
/* 32 registers for Output Queue Start Addr - SLI_PKT0_SLIST_BADDR */
|
||||
#define CN6XXX_SLI_OQ_BASE_ADDR_START64 0x1400
|
||||
|
||||
/* 32 registers for Output Queue Packet Credits - SLI_PKT0_SLIST_BAOFF_DBELL */
|
||||
#define CN6XXX_SLI_OQ_PKT_CREDITS_START 0x1800
|
||||
|
||||
/* 32 registers for Output Queue size - SLI_PKT0_SLIST_FIFO_RSIZE */
|
||||
#define CN6XXX_SLI_OQ_SIZE_START 0x1C00
|
||||
|
||||
/* 32 registers for Output Queue Packet Count - SLI_PKT0_CNTS */
|
||||
#define CN6XXX_SLI_OQ_PKT_SENT_START 0x2400
|
||||
|
||||
/* Each Output Queue register is at a 16-byte Offset in BAR0 */
|
||||
#define CN6XXX_OQ_OFFSET 0x10
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - Relaxed Ordering setting for reading Output Queues descriptors
|
||||
* - SLI_PKT_SLIST_ROR
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_SLIST_ROR 0x1030
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - No Snoop mode for reading Output Queues descriptors
|
||||
* - SLI_PKT_SLIST_NS
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_SLIST_NS 0x1040
|
||||
|
||||
/* 1 register (64-bit) - 2 bits for each output queue
|
||||
* - Endian-Swap mode for reading Output Queue descriptors
|
||||
* - SLI_PKT_SLIST_ES
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_SLIST_ES64 0x1050
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - InfoPtr mode for Output Queues.
|
||||
* - SLI_PKT_IPTR
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_IPTR 0x1070
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - DPTR format selector for Output queues.
|
||||
* - SLI_PKT_DPADDR
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_DPADDR 0x1080
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - Relaxed Ordering setting for reading Output Queues data
|
||||
* - SLI_PKT_DATA_OUT_ROR
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_DATA_OUT_ROR 0x1090
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - No Snoop mode for reading Output Queues data
|
||||
* - SLI_PKT_DATA_OUT_NS
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_DATA_OUT_NS 0x10A0
|
||||
|
||||
/* 1 register (64-bit) - 2 bits for each output queue
|
||||
* - Endian-Swap mode for reading Output Queue data
|
||||
* - SLI_PKT_DATA_OUT_ES
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_DATA_OUT_ES64 0x10B0
|
||||
|
||||
/* 1 register (32-bit) - 1 bit for each output queue
|
||||
* - Controls whether SLI_PKTn_CNTS is incremented for bytes or for packets.
|
||||
* - SLI_PKT_OUT_BMODE
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_OUT_BMODE 0x10D0
|
||||
|
||||
/* 1 register (64-bit) - 2 bits for each output queue
|
||||
* - Assign PCIE port for Output queues
|
||||
* - SLI_PKT_PCIE_PORT.
|
||||
*/
|
||||
#define CN6XXX_SLI_PKT_PCIE_PORT64 0x10E0
|
||||
|
||||
/* 1 (64-bit) register for Output Queue Packet Count Interrupt Threshold
|
||||
* & Time Threshold. The same setting applies to all 32 queues.
|
||||
* The register is defined as a 64-bit registers, but we use the
|
||||
* 32-bit offsets to define distinct addresses.
|
||||
*/
|
||||
#define CN6XXX_SLI_OQ_INT_LEVEL_PKTS 0x1120
|
||||
#define CN6XXX_SLI_OQ_INT_LEVEL_TIME 0x1124
|
||||
|
||||
/* 1 (64-bit register) for Output Queue backpressure across all rings. */
|
||||
#define CN6XXX_SLI_OQ_WMARK 0x1180
|
||||
|
||||
/* 1 register to control output queue global backpressure & ring enable. */
|
||||
#define CN6XXX_SLI_PKT_CTL 0x1220
|
||||
|
||||
/*------- Output Queue Macros ---------*/
|
||||
#define CN6XXX_SLI_OQ_BASE_ADDR64(oq) \
|
||||
(CN6XXX_SLI_OQ_BASE_ADDR_START64 + ((oq) * CN6XXX_OQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_OQ_SIZE(oq) \
|
||||
(CN6XXX_SLI_OQ_SIZE_START + ((oq) * CN6XXX_OQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_OQ_BUFF_INFO_SIZE(oq) \
|
||||
(CN6XXX_SLI_OQ0_BUFF_INFO_SIZE + ((oq) * CN6XXX_OQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_OQ_PKTS_SENT(oq) \
|
||||
(CN6XXX_SLI_OQ_PKT_SENT_START + ((oq) * CN6XXX_OQ_OFFSET))
|
||||
|
||||
#define CN6XXX_SLI_OQ_PKTS_CREDIT(oq) \
|
||||
(CN6XXX_SLI_OQ_PKT_CREDITS_START + ((oq) * CN6XXX_OQ_OFFSET))
|
||||
|
||||
/*######################### DMA Counters #########################*/
|
||||
|
||||
/* 2 registers (64-bit) - DMA Count - 1 for each DMA counter 0/1. */
|
||||
#define CN6XXX_DMA_CNT_START 0x0400
|
||||
|
||||
/* 2 registers (64-bit) - DMA Timer 0/1, contains DMA timer values
|
||||
* SLI_DMA_0_TIM
|
||||
*/
|
||||
#define CN6XXX_DMA_TIM_START 0x0420
|
||||
|
||||
/* 2 registers (64-bit) - DMA count & Time Interrupt threshold -
|
||||
* SLI_DMA_0_INT_LEVEL
|
||||
*/
|
||||
#define CN6XXX_DMA_INT_LEVEL_START 0x03E0
|
||||
|
||||
/* Each DMA register is at a 16-byte Offset in BAR0 */
|
||||
#define CN6XXX_DMA_OFFSET 0x10
|
||||
|
||||
/*---------- DMA Counter Macros ---------*/
|
||||
#define CN6XXX_DMA_CNT(dq) \
|
||||
(CN6XXX_DMA_CNT_START + ((dq) * CN6XXX_DMA_OFFSET))
|
||||
|
||||
#define CN6XXX_DMA_INT_LEVEL(dq) \
|
||||
(CN6XXX_DMA_INT_LEVEL_START + ((dq) * CN6XXX_DMA_OFFSET))
|
||||
|
||||
#define CN6XXX_DMA_PKT_INT_LEVEL(dq) \
|
||||
(CN6XXX_DMA_INT_LEVEL_START + ((dq) * CN6XXX_DMA_OFFSET))
|
||||
|
||||
#define CN6XXX_DMA_TIME_INT_LEVEL(dq) \
|
||||
(CN6XXX_DMA_INT_LEVEL_START + 4 + ((dq) * CN6XXX_DMA_OFFSET))
|
||||
|
||||
#define CN6XXX_DMA_TIM(dq) \
|
||||
(CN6XXX_DMA_TIM_START + ((dq) * CN6XXX_DMA_OFFSET))
|
||||
|
||||
/*######################## INTERRUPTS #########################*/
|
||||
|
||||
/* 1 register (64-bit) for Interrupt Summary */
|
||||
#define CN6XXX_SLI_INT_SUM64 0x0330
|
||||
|
||||
/* 1 register (64-bit) for Interrupt Enable */
|
||||
#define CN6XXX_SLI_INT_ENB64_PORT0 0x0340
|
||||
#define CN6XXX_SLI_INT_ENB64_PORT1 0x0350
|
||||
|
||||
/* 1 register (32-bit) to enable Output Queue Packet/Byte Count Interrupt */
|
||||
#define CN6XXX_SLI_PKT_CNT_INT_ENB 0x1150
|
||||
|
||||
/* 1 register (32-bit) to enable Output Queue Packet Timer Interrupt */
|
||||
#define CN6XXX_SLI_PKT_TIME_INT_ENB 0x1160
|
||||
|
||||
/* 1 register (32-bit) to indicate which Output Queue reached pkt threshold */
|
||||
#define CN6XXX_SLI_PKT_CNT_INT 0x1130
|
||||
|
||||
/* 1 register (32-bit) to indicate which Output Queue reached time threshold */
|
||||
#define CN6XXX_SLI_PKT_TIME_INT 0x1140
|
||||
|
||||
/*------------------ Interrupt Masks ----------------*/
|
||||
|
||||
#define CN6XXX_INTR_RML_TIMEOUT_ERR BIT(1)
|
||||
#define CN6XXX_INTR_BAR0_RW_TIMEOUT_ERR BIT(2)
|
||||
#define CN6XXX_INTR_IO2BIG_ERR BIT(3)
|
||||
#define CN6XXX_INTR_PKT_COUNT BIT(4)
|
||||
#define CN6XXX_INTR_PKT_TIME BIT(5)
|
||||
#define CN6XXX_INTR_M0UPB0_ERR BIT(8)
|
||||
#define CN6XXX_INTR_M0UPWI_ERR BIT(9)
|
||||
#define CN6XXX_INTR_M0UNB0_ERR BIT(10)
|
||||
#define CN6XXX_INTR_M0UNWI_ERR BIT(11)
|
||||
#define CN6XXX_INTR_M1UPB0_ERR BIT(12)
|
||||
#define CN6XXX_INTR_M1UPWI_ERR BIT(13)
|
||||
#define CN6XXX_INTR_M1UNB0_ERR BIT(14)
|
||||
#define CN6XXX_INTR_M1UNWI_ERR BIT(15)
|
||||
#define CN6XXX_INTR_MIO_INT0 BIT(16)
|
||||
#define CN6XXX_INTR_MIO_INT1 BIT(17)
|
||||
#define CN6XXX_INTR_MAC_INT0 BIT(18)
|
||||
#define CN6XXX_INTR_MAC_INT1 BIT(19)
|
||||
|
||||
#define CN6XXX_INTR_DMA0_FORCE BIT_ULL(32)
|
||||
#define CN6XXX_INTR_DMA1_FORCE BIT_ULL(33)
|
||||
#define CN6XXX_INTR_DMA0_COUNT BIT_ULL(34)
|
||||
#define CN6XXX_INTR_DMA1_COUNT BIT_ULL(35)
|
||||
#define CN6XXX_INTR_DMA0_TIME BIT_ULL(36)
|
||||
#define CN6XXX_INTR_DMA1_TIME BIT_ULL(37)
|
||||
#define CN6XXX_INTR_INSTR_DB_OF_ERR BIT_ULL(48)
|
||||
#define CN6XXX_INTR_SLIST_DB_OF_ERR BIT_ULL(49)
|
||||
#define CN6XXX_INTR_POUT_ERR BIT_ULL(50)
|
||||
#define CN6XXX_INTR_PIN_BP_ERR BIT_ULL(51)
|
||||
#define CN6XXX_INTR_PGL_ERR BIT_ULL(52)
|
||||
#define CN6XXX_INTR_PDI_ERR BIT_ULL(53)
|
||||
#define CN6XXX_INTR_POP_ERR BIT_ULL(54)
|
||||
#define CN6XXX_INTR_PINS_ERR BIT_ULL(55)
|
||||
#define CN6XXX_INTR_SPRT0_ERR BIT_ULL(56)
|
||||
#define CN6XXX_INTR_SPRT1_ERR BIT_ULL(57)
|
||||
#define CN6XXX_INTR_ILL_PAD_ERR BIT_ULL(60)
|
||||
|
||||
#define CN6XXX_INTR_DMA0_DATA (CN6XXX_INTR_DMA0_TIME)
|
||||
|
||||
#define CN6XXX_INTR_DMA1_DATA (CN6XXX_INTR_DMA1_TIME)
|
||||
|
||||
#define CN6XXX_INTR_DMA_DATA \
|
||||
(CN6XXX_INTR_DMA0_DATA | CN6XXX_INTR_DMA1_DATA)
|
||||
|
||||
#define CN6XXX_INTR_PKT_DATA (CN6XXX_INTR_PKT_TIME | \
|
||||
CN6XXX_INTR_PKT_COUNT)
|
||||
|
||||
/* Sum of interrupts for all PCI-Express Data Interrupts */
|
||||
#define CN6XXX_INTR_PCIE_DATA \
|
||||
(CN6XXX_INTR_DMA_DATA | CN6XXX_INTR_PKT_DATA)
|
||||
|
||||
#define CN6XXX_INTR_MIO \
|
||||
(CN6XXX_INTR_MIO_INT0 | CN6XXX_INTR_MIO_INT1)
|
||||
|
||||
#define CN6XXX_INTR_MAC \
|
||||
(CN6XXX_INTR_MAC_INT0 | CN6XXX_INTR_MAC_INT1)
|
||||
|
||||
/* Sum of interrupts for error events */
|
||||
#define CN6XXX_INTR_ERR \
|
||||
(CN6XXX_INTR_BAR0_RW_TIMEOUT_ERR \
|
||||
| CN6XXX_INTR_IO2BIG_ERR \
|
||||
| CN6XXX_INTR_M0UPB0_ERR \
|
||||
| CN6XXX_INTR_M0UPWI_ERR \
|
||||
| CN6XXX_INTR_M0UNB0_ERR \
|
||||
| CN6XXX_INTR_M0UNWI_ERR \
|
||||
| CN6XXX_INTR_M1UPB0_ERR \
|
||||
| CN6XXX_INTR_M1UPWI_ERR \
|
||||
| CN6XXX_INTR_M1UPB0_ERR \
|
||||
| CN6XXX_INTR_M1UNWI_ERR \
|
||||
| CN6XXX_INTR_INSTR_DB_OF_ERR \
|
||||
| CN6XXX_INTR_SLIST_DB_OF_ERR \
|
||||
| CN6XXX_INTR_POUT_ERR \
|
||||
| CN6XXX_INTR_PIN_BP_ERR \
|
||||
| CN6XXX_INTR_PGL_ERR \
|
||||
| CN6XXX_INTR_PDI_ERR \
|
||||
| CN6XXX_INTR_POP_ERR \
|
||||
| CN6XXX_INTR_PINS_ERR \
|
||||
| CN6XXX_INTR_SPRT0_ERR \
|
||||
| CN6XXX_INTR_SPRT1_ERR \
|
||||
| CN6XXX_INTR_ILL_PAD_ERR)
|
||||
|
||||
/* Programmed Mask for Interrupt Sum */
|
||||
#define CN6XXX_INTR_MASK \
|
||||
(CN6XXX_INTR_PCIE_DATA \
|
||||
| CN6XXX_INTR_DMA0_FORCE \
|
||||
| CN6XXX_INTR_DMA1_FORCE \
|
||||
| CN6XXX_INTR_MIO \
|
||||
| CN6XXX_INTR_MAC \
|
||||
| CN6XXX_INTR_ERR)
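The composite masks above are what interrupt-handling code would typically test the raw 64-bit interrupt-sum value against. A minimal, illustrative sketch (not part of the patch) of how the groupings separate error events from data events, assuming the caller has already read the interrupt sum into intr64 and that struct octeon_device comes from elsewhere in this patch:
/* Illustrative only: classify a raw interrupt-sum value with the masks above. */
static void cn6xxx_classify_intr(struct octeon_device *oct, u64 intr64)
{
	if (intr64 & CN6XXX_INTR_ERR)		/* any of the *_ERR bits */
		dev_err(&oct->pci_dev->dev,
			"error interrupt: 0x%016llx\n", intr64 & CN6XXX_INTR_ERR);

	if (intr64 & CN6XXX_INTR_PKT_DATA)	/* OQ packet count/time threshold */
		dev_dbg(&oct->pci_dev->dev, "output queue data interrupt\n");

	if (intr64 & CN6XXX_INTR_DMA_DATA)	/* DMA engine 0/1 time threshold */
		dev_dbg(&oct->pci_dev->dev, "DMA data interrupt\n");
}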
|
||||
|
||||
#define CN6XXX_SLI_S2M_PORT0_CTL 0x3D80
|
||||
#define CN6XXX_SLI_S2M_PORT1_CTL 0x3D90
|
||||
#define CN6XXX_SLI_S2M_PORTX_CTL(port) \
|
||||
(CN6XXX_SLI_S2M_PORT0_CTL + (port * 0x10))
|
||||
|
||||
#define CN6XXX_SLI_INT_ENB64(port) \
|
||||
(CN6XXX_SLI_INT_ENB64_PORT0 + (port * 0x10))
|
||||
|
||||
#define CN6XXX_SLI_MAC_NUMBER 0x3E00
|
||||
|
||||
/* CN6XXX BAR1 Index registers. */
|
||||
#define CN6XXX_PEM_BAR1_INDEX000 0x00011800C00000A8ULL
|
||||
#define CN6XXX_PEM_OFFSET 0x0000000001000000ULL
|
||||
|
||||
#define CN6XXX_BAR1_INDEX_START CN6XXX_PEM_BAR1_INDEX000
|
||||
#define CN6XXX_PCI_BAR1_OFFSET 0x8
|
||||
|
||||
#define CN6XXX_BAR1_REG(idx, port) \
|
||||
(CN6XXX_BAR1_INDEX_START + (port * CN6XXX_PEM_OFFSET) + \
|
||||
(CN6XXX_PCI_BAR1_OFFSET * (idx)))
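For reference, CN6XXX_BAR1_REG() is a plain linear address computation; a worked example (hand-calculated, not in the patch):
/* CN6XXX_BAR1_REG(idx, port)
 *   = 0x00011800C00000A8 + (port) * 0x0000000001000000 + 0x8 * (idx)
 * e.g. CN6XXX_BAR1_REG(2, 1) = 0x00011800C10000B8
 */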
|
||||
|
||||
/*############################ DPI #########################*/
|
||||
|
||||
#define CN6XXX_DPI_CTL 0x0001df0000000040ULL
|
||||
|
||||
#define CN6XXX_DPI_DMA_CONTROL 0x0001df0000000048ULL
|
||||
|
||||
#define CN6XXX_DPI_REQ_GBL_ENB 0x0001df0000000050ULL
|
||||
|
||||
#define CN6XXX_DPI_REQ_ERR_RSP 0x0001df0000000058ULL
|
||||
|
||||
#define CN6XXX_DPI_REQ_ERR_RST 0x0001df0000000060ULL
|
||||
|
||||
#define CN6XXX_DPI_DMA_ENG0_ENB 0x0001df0000000080ULL
|
||||
|
||||
#define CN6XXX_DPI_DMA_ENG_ENB(q_no) \
|
||||
(CN6XXX_DPI_DMA_ENG0_ENB + (q_no * 8))
|
||||
|
||||
#define CN6XXX_DPI_DMA_ENG0_BUF 0x0001df0000000880ULL
|
||||
|
||||
#define CN6XXX_DPI_DMA_ENG_BUF(q_no) \
|
||||
(CN6XXX_DPI_DMA_ENG0_BUF + (q_no * 8))
|
||||
|
||||
#define CN6XXX_DPI_SLI_PRT0_CFG 0x0001df0000000900ULL
|
||||
#define CN6XXX_DPI_SLI_PRT1_CFG 0x0001df0000000908ULL
|
||||
#define CN6XXX_DPI_SLI_PRTX_CFG(port) \
|
||||
(CN6XXX_DPI_SLI_PRT0_CFG + (port * 0x10))
|
||||
|
||||
#define CN6XXX_DPI_DMA_COMMIT_MODE BIT_ULL(58)
|
||||
#define CN6XXX_DPI_DMA_PKT_HP BIT_ULL(57)
|
||||
#define CN6XXX_DPI_DMA_PKT_EN BIT_ULL(56)
|
||||
#define CN6XXX_DPI_DMA_O_ES BIT_ULL(15)
|
||||
#define CN6XXX_DPI_DMA_O_MODE BIT_ULL(14)
|
||||
|
||||
#define CN6XXX_DPI_DMA_CTL_MASK \
|
||||
(CN6XXX_DPI_DMA_COMMIT_MODE | \
|
||||
CN6XXX_DPI_DMA_PKT_HP | \
|
||||
CN6XXX_DPI_DMA_PKT_EN | \
|
||||
CN6XXX_DPI_DMA_O_ES | \
|
||||
CN6XXX_DPI_DMA_O_MODE)
|
||||
|
||||
/*############################ CIU #########################*/
|
||||
|
||||
#define CN6XXX_CIU_SOFT_BIST 0x0001070000000738ULL
|
||||
#define CN6XXX_CIU_SOFT_RST 0x0001070000000740ULL
|
||||
|
||||
/*############################ MIO #########################*/
|
||||
#define CN6XXX_MIO_PTP_CLOCK_CFG 0x0001070000000f00ULL
|
||||
#define CN6XXX_MIO_PTP_CLOCK_LO 0x0001070000000f08ULL
|
||||
#define CN6XXX_MIO_PTP_CLOCK_HI 0x0001070000000f10ULL
|
||||
#define CN6XXX_MIO_PTP_CLOCK_COMP 0x0001070000000f18ULL
|
||||
#define CN6XXX_MIO_PTP_TIMESTAMP 0x0001070000000f20ULL
|
||||
#define CN6XXX_MIO_PTP_EVT_CNT 0x0001070000000f28ULL
|
||||
#define CN6XXX_MIO_PTP_CKOUT_THRESH_LO 0x0001070000000f30ULL
|
||||
#define CN6XXX_MIO_PTP_CKOUT_THRESH_HI 0x0001070000000f38ULL
|
||||
#define CN6XXX_MIO_PTP_CKOUT_HI_INCR 0x0001070000000f40ULL
|
||||
#define CN6XXX_MIO_PTP_CKOUT_LO_INCR 0x0001070000000f48ULL
|
||||
#define CN6XXX_MIO_PTP_PPS_THRESH_LO 0x0001070000000f50ULL
|
||||
#define CN6XXX_MIO_PTP_PPS_THRESH_HI 0x0001070000000f58ULL
|
||||
#define CN6XXX_MIO_PTP_PPS_HI_INCR 0x0001070000000f60ULL
|
||||
#define CN6XXX_MIO_PTP_PPS_LO_INCR 0x0001070000000f68ULL
|
||||
|
||||
#define CN6XXX_MIO_QLM4_CFG 0x00011800000015B0ULL
|
||||
#define CN6XXX_MIO_RST_BOOT 0x0001180000001600ULL
|
||||
|
||||
#define CN6XXX_MIO_QLM_CFG_MASK 0x7
|
||||
|
||||
/*############################ LMC #########################*/
|
||||
|
||||
#define CN6XXX_LMC0_RESET_CTL 0x0001180088000180ULL
|
||||
#define CN6XXX_LMC0_RESET_CTL_DDR3RST_MASK 0x0000000000000001ULL
|
||||
|
||||
#endif
drivers/net/ethernet/cavium/liquidio/cn68xx_device.c (new file, 198 lines)
@@ -0,0 +1,198 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
static void lio_cn68xx_set_dpi_regs(struct octeon_device *oct)
|
||||
{
|
||||
u32 i;
|
||||
u32 fifo_sizes[6] = { 3, 3, 1, 1, 1, 8 };
|
||||
|
||||
lio_pci_writeq(oct, CN6XXX_DPI_DMA_CTL_MASK, CN6XXX_DPI_DMA_CONTROL);
|
||||
dev_dbg(&oct->pci_dev->dev, "DPI_DMA_CONTROL: 0x%016llx\n",
|
||||
lio_pci_readq(oct, CN6XXX_DPI_DMA_CONTROL));
|
||||
|
||||
for (i = 0; i < 6; i++) {
|
||||
/* Prevent service of instruction queue for all DMA engines
|
||||
* Engine 5 will remain 0. Engines 0 - 4 will be setup by
|
||||
* core.
|
||||
*/
|
||||
lio_pci_writeq(oct, 0, CN6XXX_DPI_DMA_ENG_ENB(i));
|
||||
lio_pci_writeq(oct, fifo_sizes[i], CN6XXX_DPI_DMA_ENG_BUF(i));
|
||||
dev_dbg(&oct->pci_dev->dev, "DPI_ENG_BUF%d: 0x%016llx\n", i,
|
||||
lio_pci_readq(oct, CN6XXX_DPI_DMA_ENG_BUF(i)));
|
||||
}
|
||||
|
||||
/* DPI_SLI_PRT_CFG has MPS and MRRS settings that will be set
|
||||
* separately.
|
||||
*/
|
||||
|
||||
lio_pci_writeq(oct, 1, CN6XXX_DPI_CTL);
|
||||
dev_dbg(&oct->pci_dev->dev, "DPI_CTL: 0x%016llx\n",
|
||||
lio_pci_readq(oct, CN6XXX_DPI_CTL));
|
||||
}
|
||||
|
||||
static int lio_cn68xx_soft_reset(struct octeon_device *oct)
|
||||
{
|
||||
lio_cn6xxx_soft_reset(oct);
|
||||
lio_cn68xx_set_dpi_regs(oct);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void lio_cn68xx_setup_pkt_ctl_regs(struct octeon_device *oct)
|
||||
{
|
||||
struct octeon_cn6xxx *cn68xx = (struct octeon_cn6xxx *)oct->chip;
|
||||
u64 pktctl, tx_pipe, max_oqs;
|
||||
|
||||
pktctl = octeon_read_csr64(oct, CN6XXX_SLI_PKT_CTL);
|
||||
|
||||
/* 68XX specific */
|
||||
max_oqs = CFG_GET_OQ_MAX_Q(CHIP_FIELD(oct, cn6xxx, conf));
|
||||
tx_pipe = octeon_read_csr64(oct, CN68XX_SLI_TX_PIPE);
|
||||
tx_pipe &= 0xffffffffff00ffffULL; /* clear out NUMP field */
|
||||
tx_pipe |= max_oqs << 16; /* put max_oqs in NUMP field */
|
||||
octeon_write_csr64(oct, CN68XX_SLI_TX_PIPE, tx_pipe);
|
||||
|
||||
if (CFG_GET_IS_SLI_BP_ON(cn68xx->conf))
|
||||
pktctl |= 0xF;
|
||||
else
|
||||
/* Disable per-port backpressure. */
|
||||
pktctl &= ~0xF;
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_PKT_CTL, pktctl);
|
||||
}
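The TX_PIPE update above is a read-modify-write of the 8-bit NUMP field at bits <23:16>; a worked example with illustrative register values (not taken from hardware):
/* With max_oqs == 32 (0x20) and tx_pipe read back as 0x0000001200AB0034:
 *   tx_pipe &= 0xffffffffff00ffffULL;   -> 0x0000001200000034
 *   tx_pipe |= 0x20 << 16;              -> 0x0000001200200034
 * Only the NUMP field <23:16> changes; all other bits are preserved.
 */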
|
||||
|
||||
static int lio_cn68xx_setup_device_regs(struct octeon_device *oct)
|
||||
{
|
||||
lio_cn6xxx_setup_pcie_mps(oct, PCIE_MPS_DEFAULT);
|
||||
lio_cn6xxx_setup_pcie_mrrs(oct, PCIE_MRRS_256B);
|
||||
lio_cn6xxx_enable_error_reporting(oct);
|
||||
|
||||
lio_cn6xxx_setup_global_input_regs(oct);
|
||||
lio_cn68xx_setup_pkt_ctl_regs(oct);
|
||||
lio_cn6xxx_setup_global_output_regs(oct);
|
||||
|
||||
/* Default error timeout value should be 0x200000 to avoid host hang
|
||||
* when it reads an invalid register
|
||||
*/
|
||||
octeon_write_csr64(oct, CN6XXX_SLI_WINDOW_CTL, 0x200000ULL);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void lio_cn68xx_vendor_message_fix(struct octeon_device *oct)
|
||||
{
|
||||
u32 val = 0;
|
||||
|
||||
/* Set M_VEND1_DRP and M_VEND0_DRP bits */
|
||||
pci_read_config_dword(oct->pci_dev, CN6XXX_PCIE_FLTMSK, &val);
|
||||
val |= 0x3;
|
||||
pci_write_config_dword(oct->pci_dev, CN6XXX_PCIE_FLTMSK, val);
|
||||
}
|
||||
|
||||
int lio_is_210nv(struct octeon_device *oct)
|
||||
{
|
||||
u64 mio_qlm4_cfg = lio_pci_readq(oct, CN6XXX_MIO_QLM4_CFG);
|
||||
|
||||
return ((mio_qlm4_cfg & CN6XXX_MIO_QLM_CFG_MASK) == 0);
|
||||
}
|
||||
|
||||
int lio_setup_cn68xx_octeon_device(struct octeon_device *oct)
|
||||
{
|
||||
struct octeon_cn6xxx *cn68xx = (struct octeon_cn6xxx *)oct->chip;
|
||||
u16 card_type = LIO_410NV;
|
||||
|
||||
if (octeon_map_pci_barx(oct, 0, 0))
|
||||
return 1;
|
||||
|
||||
if (octeon_map_pci_barx(oct, 1, MAX_BAR1_IOREMAP_SIZE)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s CN68XX BAR1 map failed\n",
|
||||
__func__);
|
||||
octeon_unmap_pci_barx(oct, 0);
|
||||
return 1;
|
||||
}
|
||||
|
||||
spin_lock_init(&cn68xx->lock_for_droq_int_enb_reg);
|
||||
|
||||
oct->fn_list.setup_iq_regs = lio_cn6xxx_setup_iq_regs;
|
||||
oct->fn_list.setup_oq_regs = lio_cn6xxx_setup_oq_regs;
|
||||
|
||||
oct->fn_list.process_interrupt_regs = lio_cn6xxx_process_interrupt_regs;
|
||||
oct->fn_list.soft_reset = lio_cn68xx_soft_reset;
|
||||
oct->fn_list.setup_device_regs = lio_cn68xx_setup_device_regs;
|
||||
oct->fn_list.reinit_regs = lio_cn6xxx_reinit_regs;
|
||||
oct->fn_list.update_iq_read_idx = lio_cn6xxx_update_read_index;
|
||||
|
||||
oct->fn_list.bar1_idx_setup = lio_cn6xxx_bar1_idx_setup;
|
||||
oct->fn_list.bar1_idx_write = lio_cn6xxx_bar1_idx_write;
|
||||
oct->fn_list.bar1_idx_read = lio_cn6xxx_bar1_idx_read;
|
||||
|
||||
oct->fn_list.enable_interrupt = lio_cn6xxx_enable_interrupt;
|
||||
oct->fn_list.disable_interrupt = lio_cn6xxx_disable_interrupt;
|
||||
|
||||
oct->fn_list.enable_io_queues = lio_cn6xxx_enable_io_queues;
|
||||
oct->fn_list.disable_io_queues = lio_cn6xxx_disable_io_queues;
|
||||
|
||||
lio_cn6xxx_setup_reg_address(oct, oct->chip, &oct->reg_list);
|
||||
|
||||
/* Determine variant of card */
|
||||
if (lio_is_210nv(oct))
|
||||
card_type = LIO_210NV;
|
||||
|
||||
cn68xx->conf = (struct octeon_config *)
|
||||
oct_get_config_info(oct, card_type);
|
||||
if (!cn68xx->conf) {
|
||||
dev_err(&oct->pci_dev->dev, "%s No Config found for CN68XX %s\n",
|
||||
__func__,
|
||||
(card_type == LIO_410NV) ? LIO_410NV_NAME :
|
||||
LIO_210NV_NAME);
|
||||
octeon_unmap_pci_barx(oct, 0);
|
||||
octeon_unmap_pci_barx(oct, 1);
|
||||
return 1;
|
||||
}
|
||||
|
||||
oct->coproc_clock_rate = 1000000ULL * lio_cn6xxx_coprocessor_clock(oct);
|
||||
|
||||
lio_cn68xx_vendor_message_fix(oct);
|
||||
|
||||
return 0;
|
||||
}
drivers/net/ethernet/cavium/liquidio/cn68xx_device.h (new file, 33 lines)
@@ -0,0 +1,33 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file cn68xx_device.h
|
||||
* \brief Host Driver: Routines that perform CN68XX specific operations.
|
||||
*/
|
||||
|
||||
#ifndef __CN68XX_DEVICE_H__
|
||||
#define __CN68XX_DEVICE_H__
|
||||
|
||||
int lio_setup_cn68xx_octeon_device(struct octeon_device *oct);
|
||||
int lio_is_210nv(struct octeon_device *oct);
|
||||
|
||||
#endif
drivers/net/ethernet/cavium/liquidio/cn68xx_regs.h (new file, 51 lines)
@@ -0,0 +1,51 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file cn68xx_regs.h
|
||||
* \brief Host Driver: Register Address and Register Mask values for
|
||||
* Octeon CN68XX devices. The register map for CN66XX is the same
|
||||
* for most registers. This file has the other registers that are
|
||||
* 68XX-specific.
|
||||
*/
|
||||
|
||||
#ifndef __CN68XX_REGS_H__
|
||||
#define __CN68XX_REGS_H__
|
||||
#include "cn66xx_regs.h"
|
||||
|
||||
/*###################### REQUEST QUEUE #########################*/
|
||||
|
||||
#define CN68XX_SLI_IQ_PORT0_PKIND 0x0800
|
||||
|
||||
#define CN68XX_SLI_IQ_PORT_PKIND(iq) \
|
||||
(CN68XX_SLI_IQ_PORT0_PKIND + ((iq) * CN6XXX_IQ_OFFSET))
|
||||
|
||||
/*############################ OUTPUT QUEUE #########################*/
|
||||
|
||||
/* Starting pipe number and number of pipes used by the SLI packet output. */
|
||||
#define CN68XX_SLI_TX_PIPE 0x1230
|
||||
|
||||
/*######################## INTERRUPTS #########################*/
|
||||
|
||||
/*------------------ Interrupt Masks ----------------*/
|
||||
#define CN68XX_INTR_PIPE_ERR BIT_ULL(61)
|
||||
|
||||
#endif
drivers/net/ethernet/cavium/liquidio/lio_ethtool.c (new file, 1216 lines; diff suppressed because it is too large)
drivers/net/ethernet/cavium/liquidio/lio_main.c (new file, 3667 lines; diff suppressed because it is too large)
drivers/net/ethernet/cavium/liquidio/liquidio_common.h (new file, 673 lines)
@@ -0,0 +1,673 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file liquidio_common.h
|
||||
* \brief Common: Structures and macros used in PCI-NIC package by core and
|
||||
* host driver.
|
||||
*/
|
||||
|
||||
#ifndef __LIQUIDIO_COMMON_H__
|
||||
#define __LIQUIDIO_COMMON_H__
|
||||
|
||||
#include "octeon_config.h"
|
||||
|
||||
#define LIQUIDIO_VERSION "1.1.9"
|
||||
#define LIQUIDIO_MAJOR_VERSION 1
|
||||
#define LIQUIDIO_MINOR_VERSION 1
|
||||
#define LIQUIDIO_MICRO_VERSION 9
|
||||
|
||||
#define CONTROL_IQ 0
|
||||
/** Tag types used by Octeon cores in its work. */
|
||||
enum octeon_tag_type {
|
||||
ORDERED_TAG = 0,
|
||||
ATOMIC_TAG = 1,
|
||||
NULL_TAG = 2,
|
||||
NULL_NULL_TAG = 3
|
||||
};
|
||||
|
||||
/* pre-defined host->NIC tag values */
|
||||
#define LIO_CONTROL (0x11111110)
|
||||
#define LIO_DATA(i) (0x11111111 + (i))
|
||||
|
||||
/* Opcodes used by host driver/apps to perform operations on the core.
|
||||
* These are used to identify the major subsystem that the operation
|
||||
* is for.
|
||||
*/
|
||||
#define OPCODE_CORE 0 /* used for generic core operations */
|
||||
#define OPCODE_NIC 1 /* used for NIC operations */
|
||||
#define OPCODE_LAST OPCODE_NIC
|
||||
|
||||
/* Subcodes are used by host driver/apps to identify the sub-operation
|
||||
* for the core. They only need to be unique for a given subsystem.
|
||||
*/
|
||||
#define OPCODE_SUBCODE(op, sub) (((op & 0x0f) << 8) | ((sub) & 0x7f))
|
||||
|
||||
/** OPCODE_CORE subcodes. For future use. */
|
||||
|
||||
/** OPCODE_NIC subcodes */
|
||||
|
||||
/* This subcode is sent by core PCI driver to indicate cores are ready. */
|
||||
#define OPCODE_NIC_CORE_DRV_ACTIVE 0x01
|
||||
#define OPCODE_NIC_NW_DATA 0x02 /* network packet data */
|
||||
#define OPCODE_NIC_CMD 0x03
|
||||
#define OPCODE_NIC_INFO 0x04
|
||||
#define OPCODE_NIC_PORT_STATS 0x05
|
||||
#define OPCODE_NIC_MDIO45 0x06
|
||||
#define OPCODE_NIC_TIMESTAMP 0x07
|
||||
#define OPCODE_NIC_INTRMOD_CFG 0x08
|
||||
#define OPCODE_NIC_IF_CFG 0x09
|
||||
|
||||
#define CORE_DRV_TEST_SCATTER_OP 0xFFF5
|
||||
|
||||
#define OPCODE_SLOW_PATH(rh) \
|
||||
(OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode) != \
|
||||
OPCODE_SUBCODE(OPCODE_NIC, OPCODE_NIC_NW_DATA))
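As a worked example (not in the patch), the fast-path key that OPCODE_SLOW_PATH() compares against evaluates to:
/* OPCODE_SUBCODE(OPCODE_NIC, OPCODE_NIC_NW_DATA)
 *   = ((1 & 0x0f) << 8) | (0x02 & 0x7f)
 *   = 0x0102
 * Any receive header whose (opcode, subcode) pair does not produce 0x0102
 * is therefore treated as slow path.
 */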
|
||||
|
||||
/* Application codes advertised by the core driver initialization packet. */
|
||||
#define CVM_DRV_APP_START 0x0
|
||||
#define CVM_DRV_NO_APP 0
|
||||
#define CVM_DRV_APP_COUNT 0x2
|
||||
#define CVM_DRV_BASE_APP (CVM_DRV_APP_START + 0x0)
|
||||
#define CVM_DRV_NIC_APP (CVM_DRV_APP_START + 0x1)
|
||||
#define CVM_DRV_INVALID_APP (CVM_DRV_APP_START + 0x2)
|
||||
#define CVM_DRV_APP_END (CVM_DRV_INVALID_APP - 1)
|
||||
|
||||
/* Macro to increment index.
|
||||
* Index is incremented by count; if the sum exceeds
|
||||
* max, index is wrapped-around to the start.
|
||||
*/
|
||||
#define INCR_INDEX(index, count, max) \
|
||||
do { \
|
||||
if (((index) + (count)) >= (max)) \
|
||||
index = ((index) + (count)) - (max); \
|
||||
else \
|
||||
index += (count); \
|
||||
} while (0)
|
||||
|
||||
#define INCR_INDEX_BY1(index, max) \
|
||||
do { \
|
||||
if ((++(index)) == (max)) \
|
||||
index = 0; \
|
||||
} while (0)
|
||||
|
||||
#define DECR_INDEX(index, count, max) \
|
||||
do { \
|
||||
if ((count) > (index)) \
|
||||
index = ((max) - ((count) - (index))); \
|
||||
else \
|
||||
index -= count; \
|
||||
} while (0)
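These helpers implement modular index arithmetic on a ring of size max; a quick worked example (illustrative only):
/* With max == 32:
 *   index = 30; INCR_INDEX(index, 5, 32);   -> index == 3   (30 + 5 - 32)
 *   index = 31; INCR_INDEX_BY1(index, 32);  -> index == 0
 *   index = 2;  DECR_INDEX(index, 5, 32);   -> index == 29  (32 - (5 - 2))
 */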
|
||||
|
||||
#define OCT_BOARD_NAME 32
|
||||
#define OCT_SERIAL_LEN 64
|
||||
|
||||
/* Structure used by core driver to send indication that the Octeon
|
||||
* application is ready.
|
||||
*/
|
||||
struct octeon_core_setup {
|
||||
u64 corefreq;
|
||||
|
||||
char boardname[OCT_BOARD_NAME];
|
||||
|
||||
char board_serial_number[OCT_SERIAL_LEN];
|
||||
|
||||
u64 board_rev_major;
|
||||
|
||||
u64 board_rev_minor;
|
||||
|
||||
};
|
||||
|
||||
/*--------------------------- SCATTER GATHER ENTRY -----------------------*/
|
||||
|
||||
/* The Scatter-Gather List Entry. The scatter or gather component used with
|
||||
* an Octeon input instruction has this format.
|
||||
*/
|
||||
struct octeon_sg_entry {
|
||||
/** The first 64 bit gives the size of data in each dptr.*/
|
||||
union {
|
||||
u16 size[4];
|
||||
u64 size64;
|
||||
} u;
|
||||
|
||||
/** The 4 dptr pointers for this entry. */
|
||||
u64 ptr[4];
|
||||
|
||||
};
|
||||
|
||||
#define OCT_SG_ENTRY_SIZE (sizeof(struct octeon_sg_entry))
|
||||
|
||||
/* \brief Add size to gather list
|
||||
* @param sg_entry scatter/gather entry
|
||||
* @param size size to add
|
||||
* @param pos position to add it.
|
||||
*/
|
||||
static inline void add_sg_size(struct octeon_sg_entry *sg_entry,
|
||||
u16 size,
|
||||
u32 pos)
|
||||
{
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
sg_entry->u.size[pos] = size;
|
||||
#else
|
||||
sg_entry->u.size[3 - pos] = size;
|
||||
#endif
|
||||
}
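A minimal sketch (not part of the patch) of filling a two-fragment gather entry with this helper; the DMA addresses frag0/frag1 and their mapping are assumed to be produced elsewhere:
static inline void fill_two_frag_sg(struct octeon_sg_entry *sg,
				    u64 frag0, u16 len0,
				    u64 frag1, u16 len1)
{
	/* Sizes land in the packed size[] word, pointers in ptr[]. */
	add_sg_size(sg, len0, 0);
	sg->ptr[0] = frag0;

	add_sg_size(sg, len1, 1);
	sg->ptr[1] = frag1;
}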
|
||||
|
||||
/*------------------------- End Scatter/Gather ---------------------------*/
|
||||
|
||||
#define OCTNET_FRM_PTP_HEADER_SIZE 8
|
||||
#define OCTNET_FRM_HEADER_SIZE 30 /* PTP timestamp + VLAN + Ethernet */
|
||||
|
||||
#define OCTNET_MIN_FRM_SIZE (64 + OCTNET_FRM_PTP_HEADER_SIZE)
|
||||
#define OCTNET_MAX_FRM_SIZE (16000 + OCTNET_FRM_HEADER_SIZE)
|
||||
|
||||
#define OCTNET_DEFAULT_FRM_SIZE (1500 + OCTNET_FRM_HEADER_SIZE)
|
||||
|
||||
/** NIC Commands are sent using this Octeon Input Queue */
|
||||
#define OCTNET_CMD_Q 0
|
||||
|
||||
/* NIC Command types */
|
||||
#define OCTNET_CMD_CHANGE_MTU 0x1
|
||||
#define OCTNET_CMD_CHANGE_MACADDR 0x2
|
||||
#define OCTNET_CMD_CHANGE_DEVFLAGS 0x3
|
||||
#define OCTNET_CMD_RX_CTL 0x4
|
||||
|
||||
#define OCTNET_CMD_SET_MULTI_LIST 0x5
|
||||
#define OCTNET_CMD_CLEAR_STATS 0x6
|
||||
|
||||
/* command for setting the speed, duplex & autoneg */
|
||||
#define OCTNET_CMD_SET_SETTINGS 0x7
|
||||
#define OCTNET_CMD_SET_FLOW_CTL 0x8
|
||||
|
||||
#define OCTNET_CMD_MDIO_READ_WRITE 0x9
|
||||
#define OCTNET_CMD_GPIO_ACCESS 0xA
|
||||
#define OCTNET_CMD_LRO_ENABLE 0xB
|
||||
#define OCTNET_CMD_LRO_DISABLE 0xC
|
||||
#define OCTNET_CMD_SET_RSS 0xD
|
||||
#define OCTNET_CMD_WRITE_SA 0xE
|
||||
#define OCTNET_CMD_DELETE_SA 0xF
|
||||
#define OCTNET_CMD_UPDATE_SA 0x12
|
||||
|
||||
#define OCTNET_CMD_TNL_RX_CSUM_CTL 0x10
|
||||
#define OCTNET_CMD_TNL_TX_CSUM_CTL 0x11
|
||||
#define OCTNET_CMD_IPSECV2_AH_ESP_CTL 0x13
|
||||
#define OCTNET_CMD_VERBOSE_ENABLE 0x14
|
||||
#define OCTNET_CMD_VERBOSE_DISABLE 0x15
|
||||
|
||||
/* RX(packets coming from wire) Checksum verification flags */
|
||||
/* TCP/UDP csum */
|
||||
#define CNNIC_L4SUM_VERIFIED 0x1
|
||||
#define CNNIC_IPSUM_VERIFIED 0x2
|
||||
#define CNNIC_TUN_CSUM_VERIFIED 0x4
|
||||
#define CNNIC_CSUM_VERIFIED (CNNIC_IPSUM_VERIFIED | CNNIC_L4SUM_VERIFIED)
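These bits are reported per packet in the receive header (r_dh.csum_verified below); a hedged sketch of how a host driver could translate them for the network stack, assuming <linux/skbuff.h> is available at the point of use:
/* Illustrative only: map firmware checksum flags to skb checksum state. */
static inline void lio_set_rx_csum_state(struct sk_buff *skb, u32 csum_verified)
{
	if ((csum_verified & CNNIC_CSUM_VERIFIED) == CNNIC_CSUM_VERIFIED)
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	else
		skb->ip_summed = CHECKSUM_NONE;
}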
|
||||
|
||||
/*LROIPV4 and LROIPV6 Flags*/
|
||||
#define OCTNIC_LROIPV4 0x1
|
||||
#define OCTNIC_LROIPV6 0x2
|
||||
|
||||
/* Interface flags communicated between host driver and core app. */
|
||||
enum octnet_ifflags {
|
||||
OCTNET_IFFLAG_PROMISC = 0x01,
|
||||
OCTNET_IFFLAG_ALLMULTI = 0x02,
|
||||
OCTNET_IFFLAG_MULTICAST = 0x04,
|
||||
OCTNET_IFFLAG_BROADCAST = 0x08,
|
||||
OCTNET_IFFLAG_UNICAST = 0x10
|
||||
};
|
||||
|
||||
/* wqe
|
||||
* --------------- 0
|
||||
* | wqe word0-3 |
|
||||
* --------------- 32
|
||||
* | PCI IH |
|
||||
* --------------- 40
|
||||
* | RPTR |
|
||||
* --------------- 48
|
||||
* | PCI IRH |
|
||||
* --------------- 56
|
||||
* | OCT_NET_CMD |
|
||||
* --------------- 64
|
||||
* | Addtl 8-BData |
|
||||
* | |
|
||||
* ---------------
|
||||
*/
|
||||
|
||||
union octnet_cmd {
|
||||
u64 u64;
|
||||
|
||||
struct {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 cmd:5;
|
||||
|
||||
u64 more:6; /* How many udd words follow the command */
|
||||
|
||||
u64 param1:29;
|
||||
|
||||
u64 param2:16;
|
||||
|
||||
u64 param3:8;
|
||||
|
||||
#else
|
||||
|
||||
u64 param3:8;
|
||||
|
||||
u64 param2:16;
|
||||
|
||||
u64 param1:29;
|
||||
|
||||
u64 more:6;
|
||||
|
||||
u64 cmd:5;
|
||||
|
||||
#endif
|
||||
} s;
|
||||
|
||||
};
|
||||
|
||||
#define OCTNET_CMD_SIZE (sizeof(union octnet_cmd))
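A small hedged example of packing this union; which of param1/param2/param3 a given command uses is command-specific, so the sketch below only sets fields whose meaning is fixed by the layout above:
/* Illustrative only: build a verbose-enable control command. */
static inline void build_verbose_enable_cmd(union octnet_cmd *ncmd)
{
	ncmd->u64 = 0;				/* clear all fields */
	ncmd->s.cmd = OCTNET_CMD_VERBOSE_ENABLE;
	ncmd->s.more = 0;			/* no extra 64-bit words follow */
}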
|
||||
|
||||
/** Instruction Header */
|
||||
struct octeon_instr_ih {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
/** Raw mode indicator 1 = RAW */
|
||||
u64 raw:1;
|
||||
|
||||
/** Gather indicator 1=gather*/
|
||||
u64 gather:1;
|
||||
|
||||
/** Data length OR no. of entries in gather list */
|
||||
u64 dlengsz:14;
|
||||
|
||||
/** Front Data size */
|
||||
u64 fsz:6;
|
||||
|
||||
/** Packet Order / Work Unit selection (1 of 8)*/
|
||||
u64 qos:3;
|
||||
|
||||
/** Core group selection (1 of 16) */
|
||||
u64 grp:4;
|
||||
|
||||
/** Short Raw Packet Indicator 1=short raw pkt */
|
||||
u64 rs:1;
|
||||
|
||||
/** Tag type */
|
||||
u64 tagtype:2;
|
||||
|
||||
/** Tag Value */
|
||||
u64 tag:32;
|
||||
#else
|
||||
/** Tag Value */
|
||||
u64 tag:32;
|
||||
|
||||
/** Tag type */
|
||||
u64 tagtype:2;
|
||||
|
||||
/** Short Raw Packet Indicator 1=short raw pkt */
|
||||
u64 rs:1;
|
||||
|
||||
/** Core group selection (1 of 16) */
|
||||
u64 grp:4;
|
||||
|
||||
/** Packet Order / Work Unit selection (1 of 8)*/
|
||||
u64 qos:3;
|
||||
|
||||
/** Front Data size */
|
||||
u64 fsz:6;
|
||||
|
||||
/** Data length OR no. of entries in gather list */
|
||||
u64 dlengsz:14;
|
||||
|
||||
/** Gather indicator 1=gather*/
|
||||
u64 gather:1;
|
||||
|
||||
/** Raw mode indicator 1 = RAW */
|
||||
u64 raw:1;
|
||||
#endif
|
||||
};
|
||||
|
||||
/** Input Request Header */
|
||||
struct octeon_instr_irh {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 opcode:4;
|
||||
u64 rflag:1;
|
||||
u64 subcode:7;
|
||||
u64 len:3;
|
||||
u64 rid:13;
|
||||
u64 reserved:4;
|
||||
u64 ossp:32; /* opcode/subcode specific parameters */
|
||||
#else
|
||||
u64 ossp:32; /* opcode/subcode specific parameters */
|
||||
u64 reserved:4;
|
||||
u64 rid:13;
|
||||
u64 len:3;
|
||||
u64 subcode:7;
|
||||
u64 rflag:1;
|
||||
u64 opcode:4;
|
||||
#endif
|
||||
};
|
||||
|
||||
/** Return Data Parameters */
|
||||
struct octeon_instr_rdp {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 reserved:49;
|
||||
u64 pcie_port:3;
|
||||
u64 rlen:12;
|
||||
#else
|
||||
u64 rlen:12;
|
||||
u64 pcie_port:3;
|
||||
u64 reserved:49;
|
||||
#endif
|
||||
};
|
||||
|
||||
/** Receive Header */
|
||||
union octeon_rh {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 u64;
|
||||
struct {
|
||||
u64 opcode:4;
|
||||
u64 subcode:8;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 rid:13; /** request id in response to pkt sent by host */
|
||||
u64 reserved:4;
|
||||
u64 ossp:32; /** opcode/subcode specific parameters */
|
||||
} r;
|
||||
struct {
|
||||
u64 opcode:4;
|
||||
u64 subcode:8;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 rid:13; /** request id in response to pkt sent by host */
|
||||
u64 extra:24;
|
||||
u64 link:8;
|
||||
u64 csum_verified:3; /** checksum verified. */
|
||||
u64 has_hwtstamp:1; /** Has hardware timestamp. 1 = yes. */
|
||||
} r_dh;
|
||||
struct {
|
||||
u64 opcode:4;
|
||||
u64 subcode:8;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 rid:13; /** request id in response to pkt sent by host */
|
||||
u64 num_gmx_ports:8;
|
||||
u64 max_nic_ports:8;
|
||||
u64 app_cap_flags:4;
|
||||
u64 app_mode:16;
|
||||
} r_core_drv_init;
|
||||
struct {
|
||||
u64 opcode:4;
|
||||
u64 subcode:8;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 rid:13;
|
||||
u64 reserved:4;
|
||||
u64 extra:25;
|
||||
u64 ifidx:7;
|
||||
} r_nic_info;
|
||||
#else
|
||||
u64 u64;
|
||||
struct {
|
||||
u64 ossp:32; /** opcode/subcode specific parameters */
|
||||
u64 reserved:4;
|
||||
u64 rid:13; /** req id in response to pkt sent by host */
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 subcode:8;
|
||||
u64 opcode:4;
|
||||
} r;
|
||||
struct {
|
||||
u64 has_hwtstamp:1; /** 1 = has hwtstamp */
|
||||
u64 csum_verified:3; /** checksum verified. */
|
||||
u64 link:8;
|
||||
u64 extra:24;
|
||||
u64 rid:13; /** req id in response to pkt sent by host */
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 subcode:8;
|
||||
u64 opcode:4;
|
||||
} r_dh;
|
||||
struct {
|
||||
u64 app_mode:16;
|
||||
u64 app_cap_flags:4;
|
||||
u64 max_nic_ports:8;
|
||||
u64 num_gmx_ports:8;
|
||||
u64 rid:13;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 subcode:8;
|
||||
u64 opcode:4;
|
||||
} r_core_drv_init;
|
||||
struct {
|
||||
u64 ifidx:7;
|
||||
u64 extra:25;
|
||||
u64 reserved:4;
|
||||
u64 rid:13;
|
||||
u64 len:3; /** additional 64-bit words */
|
||||
u64 subcode:8;
|
||||
u64 opcode:4;
|
||||
} r_nic_info;
|
||||
#endif
|
||||
};
|
||||
|
||||
#define OCT_RH_SIZE (sizeof(union octeon_rh))
|
||||
|
||||
#define OCT_PKT_PARAM_IPV4OPTS 1
|
||||
#define OCT_PKT_PARAM_IPV6EXTHDR 2
|
||||
|
||||
union octnic_packet_params {
|
||||
u32 u32;
|
||||
struct {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u32 reserved:6;
|
||||
u32 tnl_csum:1;
|
||||
u32 ip_csum:1;
|
||||
u32 ipv4opts_ipv6exthdr:2;
|
||||
u32 ipsec_ops:4;
|
||||
u32 tsflag:1;
|
||||
u32 csoffset:9;
|
||||
u32 ifidx:8;
|
||||
#else
|
||||
u32 ifidx:8;
|
||||
u32 csoffset:9;
|
||||
u32 tsflag:1;
|
||||
u32 ipsec_ops:4;
|
||||
u32 ipv4opts_ipv6exthdr:2;
|
||||
u32 ip_csum:1;
|
||||
u32 tnl_csum:1;
|
||||
u32 reserved:6;
|
||||
#endif
|
||||
} s;
|
||||
};
|
||||
|
||||
/** Status of a RGMII Link on Octeon as seen by core driver. */
|
||||
union oct_link_status {
|
||||
u64 u64;
|
||||
|
||||
struct {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 duplex:8;
|
||||
u64 status:8;
|
||||
u64 mtu:16;
|
||||
u64 speed:16;
|
||||
u64 autoneg:1;
|
||||
u64 interface:4;
|
||||
u64 pause:1;
|
||||
u64 reserved:10;
|
||||
#else
|
||||
u64 reserved:10;
|
||||
u64 pause:1;
|
||||
u64 interface:4;
|
||||
u64 autoneg:1;
|
||||
u64 speed:16;
|
||||
u64 mtu:16;
|
||||
u64 status:8;
|
||||
u64 duplex:8;
|
||||
#endif
|
||||
} s;
|
||||
};
|
||||
|
||||
/** Information for an OCTEON ethernet interface shared between core & host. */
|
||||
struct oct_link_info {
|
||||
union oct_link_status link;
|
||||
u64 hw_addr;
|
||||
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u16 gmxport;
|
||||
u8 rsvd[3];
|
||||
u8 num_txpciq;
|
||||
u8 num_rxpciq;
|
||||
u8 ifidx;
|
||||
#else
|
||||
u8 ifidx;
|
||||
u8 num_rxpciq;
|
||||
u8 num_txpciq;
|
||||
u8 rsvd[3];
|
||||
u16 gmxport;
|
||||
#endif
|
||||
|
||||
u8 txpciq[MAX_IOQS_PER_NICIF];
|
||||
u8 rxpciq[MAX_IOQS_PER_NICIF];
|
||||
};
|
||||
|
||||
#define OCT_LINK_INFO_SIZE (sizeof(struct oct_link_info))
|
||||
|
||||
struct liquidio_if_cfg_info {
|
||||
u64 ifidx;
|
||||
u64 iqmask; /** mask for IQs enabled for the port */
|
||||
u64 oqmask; /** mask for OQs enabled for the port */
|
||||
struct oct_link_info linfo; /** initial link information */
|
||||
};
|
||||
|
||||
/** Stats for each NIC port in RX direction. */
|
||||
struct nic_rx_stats {
|
||||
/* link-level stats */
|
||||
u64 total_rcvd;
|
||||
u64 bytes_rcvd;
|
||||
u64 total_bcst;
|
||||
u64 total_mcst;
|
||||
u64 runts;
|
||||
u64 ctl_rcvd;
|
||||
u64 fifo_err; /* Accounts for over/under-run of buffers */
|
||||
u64 dmac_drop;
|
||||
u64 fcs_err;
|
||||
u64 jabber_err;
|
||||
u64 l2_err;
|
||||
u64 frame_err;
|
||||
|
||||
/* firmware stats */
|
||||
u64 fw_total_rcvd;
|
||||
u64 fw_total_fwd;
|
||||
u64 fw_err_pko;
|
||||
u64 fw_err_link;
|
||||
u64 fw_err_drop;
|
||||
u64 fw_lro_pkts; /* Number of packets that are LROed */
|
||||
u64 fw_lro_octs; /* Number of octets that are LROed */
|
||||
u64 fw_total_lro; /* Number of LRO packets formed */
|
||||
u64 fw_lro_aborts; /* Number of times lRO of packet aborted */
|
||||
/* intrmod: packet forward rate */
|
||||
u64 fwd_rate;
|
||||
};
|
||||
|
||||
/** Stats for each NIC port in TX direction. */
|
||||
struct nic_tx_stats {
|
||||
/* link-level stats */
|
||||
u64 total_pkts_sent;
|
||||
u64 total_bytes_sent;
|
||||
u64 mcast_pkts_sent;
|
||||
u64 bcast_pkts_sent;
|
||||
u64 ctl_sent;
|
||||
u64 one_collision_sent; /* Packets sent after one collision*/
|
||||
u64 multi_collision_sent; /* Packets sent after multiple collision*/
|
||||
u64 max_collision_fail; /* Packets not sent due to max collisions */
|
||||
u64 max_deferral_fail; /* Packets not sent due to max deferrals */
|
||||
u64 fifo_err; /* Accounts for over/under-run of buffers */
|
||||
u64 runts;
|
||||
u64 total_collisions; /* Total number of collisions detected */
|
||||
|
||||
/* firmware stats */
|
||||
u64 fw_total_sent;
|
||||
u64 fw_total_fwd;
|
||||
u64 fw_err_pko;
|
||||
u64 fw_err_link;
|
||||
u64 fw_err_drop;
|
||||
};
|
||||
|
||||
struct oct_link_stats {
|
||||
struct nic_rx_stats fromwire;
|
||||
struct nic_tx_stats fromhost;
|
||||
|
||||
};
|
||||
|
||||
#define LIO68XX_LED_CTRL_ADDR 0x3501
|
||||
#define LIO68XX_LED_CTRL_CFGON 0x1f
|
||||
#define LIO68XX_LED_CTRL_CFGOFF 0x100
|
||||
#define LIO68XX_LED_BEACON_ADDR 0x3508
|
||||
#define LIO68XX_LED_BEACON_CFGON 0x47fd
|
||||
#define LIO68XX_LED_BEACON_CFGOFF 0x11fc
|
||||
#define VITESSE_PHY_GPIO_DRIVEON 0x1
|
||||
#define VITESSE_PHY_GPIO_CFG 0x8
|
||||
#define VITESSE_PHY_GPIO_DRIVEOFF 0x4
|
||||
#define VITESSE_PHY_GPIO_HIGH 0x2
|
||||
#define VITESSE_PHY_GPIO_LOW 0x3
|
||||
|
||||
struct oct_mdio_cmd {
|
||||
u64 op;
|
||||
u64 mdio_addr;
|
||||
u64 value1;
|
||||
u64 value2;
|
||||
u64 value3;
|
||||
};
|
||||
|
||||
#define OCT_LINK_STATS_SIZE (sizeof(struct oct_link_stats))
|
||||
|
||||
#define LIO_INTRMOD_CHECK_INTERVAL 1
|
||||
#define LIO_INTRMOD_MAXPKT_RATETHR 196608 /* max pkt rate threshold */
|
||||
#define LIO_INTRMOD_MINPKT_RATETHR 9216 /* min pkt rate threshold */
|
||||
#define LIO_INTRMOD_MAXCNT_TRIGGER 384 /* max pkts to trigger interrupt */
|
||||
#define LIO_INTRMOD_MINCNT_TRIGGER 1 /* min pkts to trigger interrupt */
|
||||
#define LIO_INTRMOD_MAXTMR_TRIGGER 128 /* max time to trigger interrupt */
|
||||
#define LIO_INTRMOD_MINTMR_TRIGGER 32 /* min time to trigger interrupt */
|
||||
|
||||
struct oct_intrmod_cfg {
|
||||
u64 intrmod_enable;
|
||||
u64 intrmod_check_intrvl;
|
||||
u64 intrmod_maxpkt_ratethr;
|
||||
u64 intrmod_minpkt_ratethr;
|
||||
u64 intrmod_maxcnt_trigger;
|
||||
u64 intrmod_maxtmr_trigger;
|
||||
u64 intrmod_mincnt_trigger;
|
||||
u64 intrmod_mintmr_trigger;
|
||||
};
|
||||
|
||||
#define BASE_QUEUE_NOT_REQUESTED 65535
|
||||
|
||||
union oct_nic_if_cfg {
|
||||
u64 u64;
|
||||
struct {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 base_queue:16;
|
||||
u64 num_iqueues:16;
|
||||
u64 num_oqueues:16;
|
||||
u64 gmx_port_id:8;
|
||||
u64 reserved:8;
|
||||
#else
|
||||
u64 reserved:8;
|
||||
u64 gmx_port_id:8;
|
||||
u64 num_oqueues:16;
|
||||
u64 num_iqueues:16;
|
||||
u64 base_queue:16;
|
||||
#endif
|
||||
} s;
|
||||
};
|
||||
|
||||
#endif
drivers/net/ethernet/cavium/liquidio/liquidio_image.h (new file, 57 lines)
@@ -0,0 +1,57 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#ifndef _LIQUIDIO_IMAGE_H_
|
||||
#define _LIQUIDIO_IMAGE_H_
|
||||
|
||||
#define LIO_MAX_FW_TYPE_LEN (8)
|
||||
#define LIO_MAX_FW_FILENAME_LEN (256)
|
||||
#define LIO_FW_DIR "liquidio/"
|
||||
#define LIO_FW_BASE_NAME "lio_"
|
||||
#define LIO_FW_NAME_SUFFIX ".bin"
|
||||
#define LIO_FW_NAME_TYPE_NIC "nic"
|
||||
#define LIO_FW_NAME_TYPE_NONE "none"
|
||||
#define LIO_MAX_FIRMWARE_VERSION_LEN 16
|
||||
|
||||
#define LIO_MAX_BOOTCMD_LEN 1024
|
||||
#define LIO_MAX_IMAGES 16
|
||||
#define LIO_NIC_MAGIC 0x434E4943 /* "CNIC" */
|
||||
struct octeon_firmware_desc {
|
||||
u64 addr;
|
||||
u32 len;
|
||||
u32 crc32; /* crc32 of image */
|
||||
};
|
||||
|
||||
/* Following the header is a list of 64-bit aligned binary images,
|
||||
* as described by the desc field.
|
||||
* Numeric fields are in network byte order.
|
||||
*/
|
||||
struct octeon_firmware_file_header {
|
||||
u32 magic;
|
||||
char version[LIO_MAX_FIRMWARE_VERSION_LEN];
|
||||
char bootcmd[LIO_MAX_BOOTCMD_LEN];
|
||||
u32 num_images;
|
||||
struct octeon_firmware_desc desc[LIO_MAX_IMAGES];
|
||||
u32 pad;
|
||||
u32 crc32; /* header checksum */
|
||||
};
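Since the numeric fields are in network byte order, a loader would convert them before use; a hedged sketch of a basic sanity check (byte-order helpers from the kernel's byteorder headers assumed):
/* Illustrative only: minimal validation of a firmware file header. */
static inline int lio_fw_header_sane(const struct octeon_firmware_file_header *h)
{
	if (be32_to_cpu(h->magic) != LIO_NIC_MAGIC)
		return 0;
	if (be32_to_cpu(h->num_images) > LIO_MAX_IMAGES)
		return 0;
	return 1;
}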
|
||||
|
||||
#endif /* _LIQUIDIO_IMAGE_H_ */
drivers/net/ethernet/cavium/liquidio/octeon_config.h (new file, 424 lines)
@@ -0,0 +1,424 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_config.h
|
||||
* \brief Host Driver: Configuration data structures for the host driver.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_CONFIG_H__
|
||||
#define __OCTEON_CONFIG_H__
|
||||
|
||||
/*--------------------------CONFIG VALUES------------------------*/
|
||||
|
||||
/* The following macros affect the way the driver data structures
|
||||
* are generated for Octeon devices.
|
||||
* They can be modified.
|
||||
*/
|
||||
|
||||
/* Maximum octeon devices defined as MAX_OCTEON_NICIF to support
|
||||
* multiple(<= MAX_OCTEON_NICIF) Miniports
|
||||
*/
|
||||
#define MAX_OCTEON_NICIF 32
|
||||
#define MAX_OCTEON_DEVICES MAX_OCTEON_NICIF
|
||||
#define MAX_OCTEON_LINKS MAX_OCTEON_NICIF
|
||||
#define MAX_OCTEON_MULTICAST_ADDR 32
|
||||
|
||||
/* CN6xxx IQ configuration macros */
|
||||
#define CN6XXX_MAX_INPUT_QUEUES 32
|
||||
#define CN6XXX_MAX_IQ_DESCRIPTORS 2048
|
||||
#define CN6XXX_DB_MIN 1
|
||||
#define CN6XXX_DB_MAX 8
|
||||
#define CN6XXX_DB_TIMEOUT 1
|
||||
|
||||
/* CN6xxx OQ configuration macros */
|
||||
#define CN6XXX_MAX_OUTPUT_QUEUES 32
|
||||
#define CN6XXX_MAX_OQ_DESCRIPTORS 2048
|
||||
#define CN6XXX_OQ_BUF_SIZE 1536
|
||||
#define CN6XXX_OQ_PKTSPER_INTR ((CN6XXX_MAX_OQ_DESCRIPTORS < 512) ? \
|
||||
(CN6XXX_MAX_OQ_DESCRIPTORS / 4) : 128)
|
||||
#define CN6XXX_OQ_REFIL_THRESHOLD ((CN6XXX_MAX_OQ_DESCRIPTORS < 512) ? \
|
||||
(CN6XXX_MAX_OQ_DESCRIPTORS / 4) : 128)
|
||||
|
||||
#define CN6XXX_OQ_INTR_PKT 64
|
||||
#define CN6XXX_OQ_INTR_TIME 100
|
||||
#define DEFAULT_NUM_NIC_PORTS_66XX 2
|
||||
#define DEFAULT_NUM_NIC_PORTS_68XX 4
|
||||
#define DEFAULT_NUM_NIC_PORTS_68XX_210NV 2
|
||||
|
||||
/* common OCTEON configuration macros */
|
||||
#define CN6XXX_CFG_IO_QUEUES 32
|
||||
#define OCTEON_32BYTE_INSTR 32
|
||||
#define OCTEON_64BYTE_INSTR 64
|
||||
#define OCTEON_MAX_BASE_IOQ 4
|
||||
#define OCTEON_OQ_BUFPTR_MODE 0
|
||||
#define OCTEON_OQ_INFOPTR_MODE 1
|
||||
|
||||
#define OCTEON_DMA_INTR_PKT 64
|
||||
#define OCTEON_DMA_INTR_TIME 1000
|
||||
|
||||
#define MAX_TXQS_PER_INTF 8
|
||||
#define MAX_RXQS_PER_INTF 8
|
||||
#define DEF_TXQS_PER_INTF 4
|
||||
#define DEF_RXQS_PER_INTF 4
|
||||
|
||||
#define INVALID_IOQ_NO 0xff
|
||||
|
||||
#define DEFAULT_POW_GRP 0
|
||||
|
||||
/* Macros to get octeon config params */
|
||||
#define CFG_GET_IQ_CFG(cfg) ((cfg)->iq)
|
||||
#define CFG_GET_IQ_MAX_Q(cfg) ((cfg)->iq.max_iqs)
|
||||
#define CFG_GET_IQ_PENDING_LIST_SIZE(cfg) ((cfg)->iq.pending_list_size)
|
||||
#define CFG_GET_IQ_INSTR_TYPE(cfg) ((cfg)->iq.instr_type)
|
||||
#define CFG_GET_IQ_DB_MIN(cfg) ((cfg)->iq.db_min)
|
||||
#define CFG_GET_IQ_DB_TIMEOUT(cfg) ((cfg)->iq.db_timeout)
|
||||
|
||||
#define CFG_GET_OQ_MAX_Q(cfg) ((cfg)->oq.max_oqs)
|
||||
#define CFG_GET_OQ_INFO_PTR(cfg) ((cfg)->oq.info_ptr)
|
||||
#define CFG_GET_OQ_PKTS_PER_INTR(cfg) ((cfg)->oq.pkts_per_intr)
|
||||
#define CFG_GET_OQ_REFILL_THRESHOLD(cfg) ((cfg)->oq.refill_threshold)
|
||||
#define CFG_GET_OQ_INTR_PKT(cfg) ((cfg)->oq.oq_intr_pkt)
|
||||
#define CFG_GET_OQ_INTR_TIME(cfg) ((cfg)->oq.oq_intr_time)
|
||||
#define CFG_SET_OQ_INTR_PKT(cfg, val) (cfg)->oq.oq_intr_pkt = val
|
||||
#define CFG_SET_OQ_INTR_TIME(cfg, val) (cfg)->oq.oq_intr_time = val
|
||||
|
||||
#define CFG_GET_DMA_INTR_PKT(cfg) ((cfg)->dma.dma_intr_pkt)
|
||||
#define CFG_GET_DMA_INTR_TIME(cfg) ((cfg)->dma.dma_intr_time)
|
||||
#define CFG_GET_NUM_NIC_PORTS(cfg) ((cfg)->num_nic_ports)
|
||||
#define CFG_GET_NUM_DEF_TX_DESCS(cfg) ((cfg)->num_def_tx_descs)
|
||||
#define CFG_GET_NUM_DEF_RX_DESCS(cfg) ((cfg)->num_def_rx_descs)
|
||||
#define CFG_GET_DEF_RX_BUF_SIZE(cfg) ((cfg)->def_rx_buf_size)
|
||||
|
||||
#define CFG_GET_MAX_TXQS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].max_txqs)
|
||||
#define CFG_GET_NUM_TXQS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].num_txqs)
|
||||
#define CFG_GET_MAX_RXQS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].max_rxqs)
|
||||
#define CFG_GET_NUM_RXQS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].num_rxqs)
|
||||
#define CFG_GET_NUM_RX_DESCS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].num_rx_descs)
|
||||
#define CFG_GET_NUM_TX_DESCS_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].num_tx_descs)
|
||||
#define CFG_GET_NUM_RX_BUF_SIZE_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].rx_buf_size)
|
||||
#define CFG_GET_BASE_QUE_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].base_queue)
|
||||
#define CFG_GET_GMXID_NIC_IF(cfg, idx) \
|
||||
((cfg)->nic_if_cfg[idx].gmx_port_id)
|
||||
|
||||
#define CFG_GET_CTRL_Q_GRP(cfg) ((cfg)->misc.ctrlq_grp)
|
||||
#define CFG_GET_HOST_LINK_QUERY_INTERVAL(cfg) \
|
||||
((cfg)->misc.host_link_query_interval)
|
||||
#define CFG_GET_OCT_LINK_QUERY_INTERVAL(cfg) \
|
||||
((cfg)->misc.oct_link_query_interval)
|
||||
#define CFG_GET_IS_SLI_BP_ON(cfg) ((cfg)->misc.enable_sli_oq_bp)
|
||||
|
||||
/* Max IOQs per OCTEON Link */
|
||||
#define MAX_IOQS_PER_NICIF 32
|
||||
|
||||
enum lio_card_type {
|
||||
LIO_210SV = 0, /* Two port, 66xx */
|
||||
LIO_210NV, /* Two port, 68xx */
|
||||
LIO_410NV /* Four port, 68xx */
|
||||
};
|
||||
|
||||
#define LIO_210SV_NAME "210sv"
|
||||
#define LIO_210NV_NAME "210nv"
|
||||
#define LIO_410NV_NAME "410nv"
|
||||
|
||||
/** Structure to define the configuration attributes for each Input queue.
|
||||
* Applicable to all Octeon processors
|
||||
**/
|
||||
struct octeon_iq_config {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 reserved:32;
|
||||
|
||||
/** Minimum ticks to wait before checking for pending instructions. */
|
||||
u64 db_timeout:16;
|
||||
|
||||
/** Minimum number of commands pending to be posted to Octeon
|
||||
* before driver hits the Input queue doorbell.
|
||||
*/
|
||||
u64 db_min:8;
|
||||
|
||||
/** Command size - 32 or 64 bytes */
|
||||
u64 instr_type:32;
|
||||
|
||||
/** Pending list size (usually set to the sum of the size of all Input
|
||||
* queues)
|
||||
*/
|
||||
u64 pending_list_size:32;
|
||||
|
||||
/* Max number of IQs available */
|
||||
u64 max_iqs:8;
|
||||
#else
|
||||
/* Max number of IQs available */
|
||||
u64 max_iqs:8;
|
||||
|
||||
/** Pending list size (usually set to the sum of the size of all Input
|
||||
* queues)
|
||||
*/
|
||||
u64 pending_list_size:32;
|
||||
|
||||
/** Command size - 32 or 64 bytes */
|
||||
u64 instr_type:32;
|
||||
|
||||
/** Minimum number of commands pending to be posted to Octeon
|
||||
* before driver hits the Input queue doorbell.
|
||||
*/
|
||||
u64 db_min:8;
|
||||
|
||||
/** Minimum ticks to wait before checking for pending instructions. */
|
||||
u64 db_timeout:16;
|
||||
|
||||
u64 reserved:32;
|
||||
#endif
|
||||
};
|
||||
|
||||
/** Structure to define the configuration attributes for each Output queue.
|
||||
* Applicable to all Octeon processors
|
||||
**/
|
||||
struct octeon_oq_config {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 reserved:16;
|
||||
|
||||
u64 pkts_per_intr:16;
|
||||
|
||||
/** Interrupt Coalescing (Time Interval). Octeon will interrupt the
|
||||
* host if at least one packet was sent in the time interval specified
|
||||
* by this field. The driver uses time interval interrupt coalescing
|
||||
* by default. The time is specified in microseconds.
|
||||
*/
|
||||
u64 oq_intr_time:16;
|
||||
|
||||
/** Interrupt Coalescing (Packet Count). Octeon will interrupt the host
|
||||
* only if it sent as many packets as specified by this field.
|
||||
* The driver
|
||||
* usually does not use packet count interrupt coalescing.
|
||||
*/
|
||||
u64 oq_intr_pkt:16;
|
||||
|
||||
/** The number of buffers that were consumed during packet processing by
|
||||
* the driver on this Output queue before the driver attempts to
|
||||
* replenish
|
||||
* the descriptor ring with new buffers.
|
||||
*/
|
||||
u64 refill_threshold:16;
|
||||
|
||||
/** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
|
||||
u64 info_ptr:32;
|
||||
|
||||
/* Max number of OQs available */
|
||||
u64 max_oqs:8;
|
||||
|
||||
#else
|
||||
/* Max number of OQs available */
|
||||
u64 max_oqs:8;
|
||||
|
||||
/** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
|
||||
u64 info_ptr:32;
|
||||
|
||||
/** The number of buffers that were consumed during packet processing by
|
||||
* the driver on this Output queue before the driver attempts to
|
||||
* replenish
|
||||
* the descriptor ring with new buffers.
|
||||
*/
|
||||
u64 refill_threshold:16;
|
||||
|
||||
/** Interrupt Coalescing (Packet Count). Octeon will interrupt the host
|
||||
* only if it sent as many packets as specified by this field.
|
||||
* The driver
|
||||
* usually does not use packet count interrupt coalescing.
|
||||
*/
|
||||
u64 oq_intr_pkt:16;
|
||||
|
||||
/** Interrupt Coalescing (Time Interval). Octeon will interrupt the
|
||||
* host if at least one packet was sent in the time interval specified
|
||||
* by this field. The driver uses time interval interrupt coalescing
|
||||
* by default. The time is specified in microseconds.
|
||||
*/
|
||||
u64 oq_intr_time:16;
|
||||
|
||||
u64 pkts_per_intr:16;
|
||||
|
||||
u64 reserved:16;
|
||||
#endif
|
||||
|
||||
};
|
||||
|
||||
/** This structure contains the NIC link configuration attributes,
|
||||
* common for all the OCTEON models.
|
||||
*/
|
||||
struct octeon_nic_if_config {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u64 reserved:56;
|
||||
|
||||
u64 base_queue:16;
|
||||
|
||||
u64 gmx_port_id:8;
|
||||
|
||||
/* SKB size, We need not change buf size even for Jumbo frames.
|
||||
* Octeon can send jumbo frames in 4 consecutive descriptors,
|
||||
*/
|
||||
u64 rx_buf_size:16;
|
||||
|
||||
/* Num of desc for tx rings */
|
||||
u64 num_tx_descs:16;
|
||||
|
||||
/* Num of desc for rx rings */
|
||||
u64 num_rx_descs:16;
|
||||
|
||||
/* Actual configured value. Range could be: 1...max_rxqs */
|
||||
u64 num_rxqs:16;
|
||||
|
||||
/* Max Rxqs: Half for each of the two ports :max_oq/2 */
|
||||
u64 max_rxqs:16;
|
||||
|
||||
/* Actual configured value. Range could be: 1...max_txqs */
|
||||
u64 num_txqs:16;
|
||||
|
||||
/* Max Txqs: Half for each of the two ports :max_iq/2 */
|
||||
u64 max_txqs:16;
|
||||
#else
|
||||
/* Max Txqs: Half for each of the two ports :max_iq/2 */
|
||||
u64 max_txqs:16;
|
||||
|
||||
/* Actual configured value. Range could be: 1...max_txqs */
|
||||
u64 num_txqs:16;
|
||||
|
||||
/* Max Rxqs: Half for each of the two ports :max_oq/2 */
|
||||
u64 max_rxqs:16;
|
||||
|
||||
/* Actual configured value. Range could be: 1...max_rxqs */
|
||||
u64 num_rxqs:16;
|
||||
|
||||
/* Num of desc for rx rings */
|
||||
u64 num_rx_descs:16;
|
||||
|
||||
/* Num of desc for tx rings */
|
||||
u64 num_tx_descs:16;
|
||||
|
||||
/* SKB size, We need not change buf size even for Jumbo frames.
|
||||
* Octeon can send jumbo frames in 4 consecutive descriptors,
|
||||
*/
|
||||
u64 rx_buf_size:16;
|
||||
|
||||
u64 gmx_port_id:8;
|
||||
|
||||
u64 base_queue:16;
|
||||
|
||||
u64 reserved:56;
|
||||
#endif
|
||||
|
||||
};
|
||||
|
||||
/** Structure to define the configuration attributes for meta data.
|
||||
* Applicable to all Octeon processors.
|
||||
*/
|
||||
|
||||
struct octeon_misc_config {
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
/** Host link status polling period */
|
||||
u64 host_link_query_interval:32;
|
||||
/** Oct link status polling period */
|
||||
u64 oct_link_query_interval:32;
|
||||
|
||||
u64 enable_sli_oq_bp:1;
|
||||
/** Control IQ Group */
|
||||
u64 ctrlq_grp:4;
|
||||
#else
|
||||
/** Control IQ Group */
|
||||
u64 ctrlq_grp:4;
|
||||
/** BP for SLI OQ */
|
||||
u64 enable_sli_oq_bp:1;
|
||||
/** Host link status polling period */
|
||||
u64 oct_link_query_interval:32;
|
||||
/** Oct link status polling period */
|
||||
u64 host_link_query_interval:32;
|
||||
#endif
|
||||
};
|
||||
|
||||
/** Structure to define the configuration for all OCTEON processors. */
|
||||
struct octeon_config {
|
||||
u16 card_type;
|
||||
char *card_name;
|
||||
|
||||
/** Input Queue attributes. */
|
||||
struct octeon_iq_config iq;
|
||||
|
||||
/** Output Queue attributes. */
|
||||
struct octeon_oq_config oq;
|
||||
|
||||
/** NIC Port Configuration */
|
||||
struct octeon_nic_if_config nic_if_cfg[MAX_OCTEON_NICIF];
|
||||
|
||||
/** Miscellaneous attributes */
|
||||
struct octeon_misc_config misc;
|
||||
|
||||
int num_nic_ports;
|
||||
|
||||
int num_def_tx_descs;
|
||||
|
||||
/* Num of desc for rx rings */
|
||||
int num_def_rx_descs;
|
||||
|
||||
int def_rx_buf_size;
|
||||
|
||||
};
|
||||
|
||||
/* The following config values are fixed and should not be modified. */
|
||||
|
||||
/* Maximum address space to be mapped for Octeon's BAR1 index-based access. */
|
||||
#define MAX_BAR1_MAP_INDEX 2
|
||||
#define OCTEON_BAR1_ENTRY_SIZE (4 * 1024 * 1024)
|
||||
|
||||
/* BAR1 Index 0 to (MAX_BAR1_MAP_INDEX - 1) for normal mapped memory access.
|
||||
* Bar1 register at MAX_BAR1_MAP_INDEX used by driver for dynamic access.
|
||||
*/
|
||||
#define MAX_BAR1_IOREMAP_SIZE ((MAX_BAR1_MAP_INDEX + 1) * \
|
||||
OCTEON_BAR1_ENTRY_SIZE)
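For the values above the mapping size is fixed; quick arithmetic:
/* MAX_BAR1_IOREMAP_SIZE = (MAX_BAR1_MAP_INDEX + 1) * OCTEON_BAR1_ENTRY_SIZE
 *                       = (2 + 1) * 4 MB = 12 MB of BAR1 space per device.
 */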
|
||||
|
||||
/* Response lists - 1 ordered, 1 unordered-blocking, 1 unordered-nonblocking
|
||||
* NoResponse Lists are now maintained with each IQ. (Dec' 2007).
|
||||
*/
|
||||
#define MAX_RESPONSE_LISTS 4
|
||||
|
||||
/* Opcode hash bits. The opcode is hashed on the lower 6-bits to lookup the
|
||||
* dispatch table.
|
||||
*/
|
||||
#define OPCODE_MASK_BITS 6
|
||||
|
||||
/* Mask for the 6-bit lookup hash */
|
||||
#define OCTEON_OPCODE_MASK 0x3f
|
||||
|
||||
/* Size of the dispatch table. The 6-bit hash can index into 2^6 entries */
|
||||
#define DISPATCH_LIST_SIZE BIT(OPCODE_MASK_BITS)
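A plausible worked example of the lookup described above (whether the key is the bare opcode or the combined opcode/subcode value is an assumption here, not something this header fixes):
/* Example hash into the 64-entry dispatch table:
 *   key = OPCODE_SUBCODE(OPCODE_NIC, OPCODE_NIC_CORE_DRV_ACTIVE) = 0x0101
 *   idx = key & OCTEON_OPCODE_MASK = 0x0101 & 0x3f = 0x01
 */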
|
||||
|
||||
/* Maximum number of Octeon Instruction (command) queues */
|
||||
#define MAX_OCTEON_INSTR_QUEUES CN6XXX_MAX_INPUT_QUEUES
|
||||
|
||||
/* Maximum number of Octeon Instruction (command) queues */
|
||||
#define MAX_OCTEON_OUTPUT_QUEUES CN6XXX_MAX_OUTPUT_QUEUES
|
||||
|
||||
#endif /* __OCTEON_CONFIG_H__ */
drivers/net/ethernet/cavium/liquidio/octeon_console.c (new file, 723 lines)
@@ -0,0 +1,723 @@
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/**
|
||||
* @file octeon_console.c
|
||||
*/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
static void octeon_remote_lock(void);
|
||||
static void octeon_remote_unlock(void);
|
||||
static u64 cvmx_bootmem_phy_named_block_find(struct octeon_device *oct,
|
||||
const char *name,
|
||||
u32 flags);
|
||||
|
||||
#define MIN(a, b) min((a), (b))
|
||||
#define CAST_ULL(v) ((u64)(v))
|
||||
|
||||
#define BOOTLOADER_PCI_READ_BUFFER_DATA_ADDR 0x0006c008
|
||||
#define BOOTLOADER_PCI_READ_BUFFER_LEN_ADDR 0x0006c004
|
||||
#define BOOTLOADER_PCI_READ_BUFFER_OWNER_ADDR 0x0006c000
|
||||
#define BOOTLOADER_PCI_READ_DESC_ADDR 0x0006c100
|
||||
#define BOOTLOADER_PCI_WRITE_BUFFER_STR_LEN 248
|
||||
|
||||
#define OCTEON_PCI_IO_BUF_OWNER_OCTEON 0x00000001
|
||||
#define OCTEON_PCI_IO_BUF_OWNER_HOST 0x00000002
|
||||
|
||||
/** Can change without breaking ABI */
|
||||
#define CVMX_BOOTMEM_NUM_NAMED_BLOCKS 64
|
||||
|
||||
/** minimum alignment of bootmem alloced blocks */
|
||||
#define CVMX_BOOTMEM_ALIGNMENT_SIZE (16ull)
|
||||
|
||||
/** CVMX bootmem descriptor major version */
|
||||
#define CVMX_BOOTMEM_DESC_MAJ_VER 3
|
||||
/* CVMX bootmem descriptor minor version */
|
||||
#define CVMX_BOOTMEM_DESC_MIN_VER 0
|
||||
|
||||
/* Current versions */
|
||||
#define OCTEON_PCI_CONSOLE_MAJOR_VERSION 1
|
||||
#define OCTEON_PCI_CONSOLE_MINOR_VERSION 0
|
||||
#define OCTEON_PCI_CONSOLE_BLOCK_NAME "__pci_console"
|
||||
#define OCTEON_CONSOLE_POLL_INTERVAL_MS 100 /* 10 times per second */
|
||||
|
||||
/* First three members of cvmx_bootmem_desc are left in original
|
||||
** positions for backwards compatibility.
|
||||
** Assumes big endian target
|
||||
*/
|
||||
struct cvmx_bootmem_desc {
|
||||
/** spinlock to control access to list */
|
||||
u32 lock;
|
||||
|
||||
/** flags for indicating various conditions */
|
||||
u32 flags;
|
||||
|
||||
u64 head_addr;
|
||||
|
||||
/** incremented when incompatible changes are made */
|
||||
u32 major_version;
|
||||
|
||||
/** incremented when compatible changes are made,
|
||||
* reset to zero when major incremented
|
||||
*/
|
||||
u32 minor_version;
|
||||
|
||||
u64 app_data_addr;
|
||||
u64 app_data_size;
|
||||
|
||||
/** number of elements in named blocks array */
|
||||
u32 nb_num_blocks;
|
||||
|
||||
/** length of name array in bootmem blocks */
|
||||
u32 named_block_name_len;
|
||||
|
||||
/** address of named memory block descriptors */
|
||||
u64 named_block_array_addr;
|
||||
};
|
||||
|
||||
/* Structure that defines a single console.
|
||||
*
|
||||
* Note: when read_index == write_index, the buffer is empty.
|
||||
* The actual usable size of each console is console_buf_size - 1.
|
||||
*/
|
||||
struct octeon_pci_console {
|
||||
u64 input_base_addr;
|
||||
u32 input_read_index;
|
||||
u32 input_write_index;
|
||||
u64 output_base_addr;
|
||||
u32 output_read_index;
|
||||
u32 output_write_index;
|
||||
u32 lock;
|
||||
u32 buf_size;
|
||||
};
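/* Worked example (illustrative numbers, not part of this file): with
 * buf_size = 1024, output_read_index = 8 and output_write_index = 1000,
 * the host may read 992 bytes; once read_index catches up to write_index
 * the console is empty again. Because one slot is always kept unused, at
 * most buf_size - 1 = 1023 bytes are ever buffered, which is how the
 * "read_index == write_index means empty" convention avoids being
 * ambiguous with a completely full buffer.
 */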
|
||||
|
||||
/* This is the main container structure that contains all the information
|
||||
* about all PCI consoles. The address of this structure is passed to various
|
||||
* routines that operate on PCI consoles.
|
||||
*/
|
||||
struct octeon_pci_console_desc {
|
||||
u32 major_version;
|
||||
u32 minor_version;
|
||||
u32 lock;
|
||||
u32 flags;
|
||||
u32 num_consoles;
|
||||
u32 pad;
|
||||
/* must be 64 bit aligned here... */
|
||||
/* Array of addresses of octeon_pci_console structures */
|
||||
u64 console_addr_array[0];
|
||||
/* Implicit storage for console_addr_array */
|
||||
};
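/* For a given console number n, the address of its struct
 * octeon_pci_console is found by reading the 64-bit entry at
 *   console_desc_addr + offsetof(struct octeon_pci_console_desc,
 *                                console_addr_array) + n * 8
 * which is exactly what octeon_add_console() does below when it fills in
 * oct->console[n].addr.
 */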
|
||||
|
||||
/**
|
||||
* This macro returns the size of a member of a structure.
|
||||
* Logically it is the same as "sizeof(s::field)" in C++, but
|
||||
* C lacks the "::" operator.
|
||||
*/
|
||||
#define SIZEOF_FIELD(s, field) sizeof(((s *)NULL)->field)
|
||||
|
||||
/**
|
||||
* This macro returns a member of the cvmx_bootmem_desc
|
||||
* structure. These members can't be directly addressed as
|
||||
* they might be in memory not directly reachable. In the case
|
||||
* where bootmem is compiled with LINUX_HOST, the structure
|
||||
* itself might be located on a remote Octeon. The argument
|
||||
* "field" is the member name of the cvmx_bootmem_desc to read.
|
||||
* Regardless of the type of the field, the return type is always
|
||||
* a u64.
|
||||
*/
|
||||
#define CVMX_BOOTMEM_DESC_GET_FIELD(oct, field) \
|
||||
__cvmx_bootmem_desc_get(oct, oct->bootmem_desc_addr, \
|
||||
offsetof(struct cvmx_bootmem_desc, field), \
|
||||
SIZEOF_FIELD(struct cvmx_bootmem_desc, field))
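/* Example expansion (illustrative):
 * CVMX_BOOTMEM_DESC_GET_FIELD(oct, major_version) becomes
 *   __cvmx_bootmem_desc_get(oct, oct->bootmem_desc_addr,
 *           offsetof(struct cvmx_bootmem_desc, major_version),
 *           sizeof(u32))
 * i.e. a 4-byte windowed read at offset 16 into the remote descriptor,
 * assuming the natural member alignment of the structure defined above.
 */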
|
||||
|
||||
#define __cvmx_bootmem_lock(flags)
|
||||
#define __cvmx_bootmem_unlock(flags)
|
||||
|
||||
/**
|
||||
* This macro returns a member of the
|
||||
* cvmx_bootmem_named_block_desc structure. These members can't
|
||||
* be directly addressed as they might be in memory not directly
|
||||
* reachable. In the case where bootmem is compiled with
|
||||
* LINUX_HOST, the structure itself might be located on a remote
|
||||
* Octeon. The argument "field" is the member name of the
|
||||
* cvmx_bootmem_named_block_desc to read. Regardless of the type
|
||||
* of the field, the return type is always a u64. The "addr"
|
||||
* parameter is the physical address of the structure.
|
||||
*/
|
||||
#define CVMX_BOOTMEM_NAMED_GET_FIELD(oct, addr, field) \
|
||||
__cvmx_bootmem_desc_get(oct, addr, \
|
||||
offsetof(struct cvmx_bootmem_named_block_desc, field), \
|
||||
SIZEOF_FIELD(struct cvmx_bootmem_named_block_desc, field))
|
||||
|
||||
/**
|
||||
* This function is the implementation of the get macros defined
|
||||
* for individual structure members. The arguments are generated
|
||||
* by the macros in order to read only the needed memory.
|
||||
*
|
||||
* @param oct Pointer to current octeon device
|
||||
* @param base 64bit physical address of the complete structure
|
||||
* @param offset Offset from the beginning of the structure to the member being
|
||||
* accessed.
|
||||
* @param size Size of the structure member.
|
||||
*
|
||||
* @return Value of the structure member promoted into a u64.
|
||||
*/
|
||||
static inline u64 __cvmx_bootmem_desc_get(struct octeon_device *oct,
|
||||
u64 base,
|
||||
u32 offset,
|
||||
u32 size)
|
||||
{
|
||||
base = (1ull << 63) | (base + offset);
|
||||
switch (size) {
|
||||
case 4:
|
||||
return octeon_read_device_mem32(oct, base);
|
||||
case 8:
|
||||
return octeon_read_device_mem64(oct, base);
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* This function retrieves the string name of a named block. It is
|
||||
* more complicated than a simple memcpy() since the named block
|
||||
* descriptor may not be directly accessible.
|
||||
*
|
||||
* @param addr Physical address of the named block descriptor
|
||||
* @param str String to receive the named block string name
|
||||
* @param len Length of the string buffer, which must match the length
|
||||
* stored in the bootmem descriptor.
|
||||
*/
|
||||
static void CVMX_BOOTMEM_NAMED_GET_NAME(struct octeon_device *oct,
|
||||
u64 addr,
|
||||
char *str,
|
||||
u32 len)
|
||||
{
|
||||
addr += offsetof(struct cvmx_bootmem_named_block_desc, name);
|
||||
octeon_pci_read_core_mem(oct, addr, str, len);
|
||||
str[len] = 0;
|
||||
}
|
||||
|
||||
/* See header file for descriptions of functions */
|
||||
|
||||
/**
|
||||
* Check the version information on the bootmem descriptor
|
||||
*
|
||||
* @param exact_match
|
||||
* Exact major version to check against. A zero means
|
||||
* check that the version supports named blocks.
|
||||
*
|
||||
* @return Zero if the version is correct. Negative if the version is
|
||||
* incorrect. Failures also cause a message to be displayed.
|
||||
*/
|
||||
static int __cvmx_bootmem_check_version(struct octeon_device *oct,
|
||||
u32 exact_match)
|
||||
{
|
||||
u32 major_version;
|
||||
u32 minor_version;
|
||||
|
||||
if (!oct->bootmem_desc_addr)
|
||||
oct->bootmem_desc_addr =
|
||||
octeon_read_device_mem64(oct,
|
||||
BOOTLOADER_PCI_READ_DESC_ADDR);
|
||||
major_version =
|
||||
(u32)CVMX_BOOTMEM_DESC_GET_FIELD(oct, major_version);
|
||||
minor_version =
|
||||
(u32)CVMX_BOOTMEM_DESC_GET_FIELD(oct, minor_version);
|
||||
dev_dbg(&oct->pci_dev->dev, "%s: major_version=%d\n", __func__,
|
||||
major_version);
|
||||
if ((major_version > 3) ||
|
||||
(exact_match && major_version != exact_match)) {
|
||||
dev_err(&oct->pci_dev->dev, "bootmem ver mismatch %d.%d addr:0x%llx\n",
|
||||
major_version, minor_version,
|
||||
CAST_ULL(oct->bootmem_desc_addr));
|
||||
return -1;
|
||||
} else {
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
static const struct cvmx_bootmem_named_block_desc
|
||||
*__cvmx_bootmem_find_named_block_flags(struct octeon_device *oct,
|
||||
const char *name, u32 flags)
|
||||
{
|
||||
struct cvmx_bootmem_named_block_desc *desc =
|
||||
&oct->bootmem_named_block_desc;
|
||||
u64 named_addr = cvmx_bootmem_phy_named_block_find(oct, name, flags);
|
||||
|
||||
if (named_addr) {
|
||||
desc->base_addr = CVMX_BOOTMEM_NAMED_GET_FIELD(oct, named_addr,
|
||||
base_addr);
|
||||
desc->size =
|
||||
CVMX_BOOTMEM_NAMED_GET_FIELD(oct, named_addr, size);
|
||||
strncpy(desc->name, name, sizeof(desc->name));
|
||||
desc->name[sizeof(desc->name) - 1] = 0;
|
||||
return &oct->bootmem_named_block_desc;
|
||||
} else {
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static u64 cvmx_bootmem_phy_named_block_find(struct octeon_device *oct,
|
||||
const char *name,
|
||||
u32 flags)
|
||||
{
|
||||
u64 result = 0;
|
||||
|
||||
__cvmx_bootmem_lock(flags);
|
||||
if (!__cvmx_bootmem_check_version(oct, 3)) {
|
||||
u32 i;
|
||||
u64 named_block_array_addr =
|
||||
CVMX_BOOTMEM_DESC_GET_FIELD(oct,
|
||||
named_block_array_addr);
|
||||
u32 num_blocks = (u32)
|
||||
CVMX_BOOTMEM_DESC_GET_FIELD(oct, nb_num_blocks);
|
||||
u32 name_length = (u32)
|
||||
CVMX_BOOTMEM_DESC_GET_FIELD(oct, named_block_name_len);
|
||||
u64 named_addr = named_block_array_addr;
|
||||
|
||||
for (i = 0; i < num_blocks; i++) {
|
||||
u64 named_size =
|
||||
CVMX_BOOTMEM_NAMED_GET_FIELD(oct, named_addr,
|
||||
size);
|
||||
if (name && named_size) {
|
||||
char *name_tmp =
|
||||
kmalloc(name_length + 1, GFP_KERNEL);
|
||||
CVMX_BOOTMEM_NAMED_GET_NAME(oct, named_addr,
|
||||
name_tmp,
|
||||
name_length);
|
||||
if (!strncmp(name, name_tmp, name_length)) {
|
||||
result = named_addr;
|
||||
kfree(name_tmp);
|
||||
break;
|
||||
}
|
||||
kfree(name_tmp);
|
||||
} else if (!name && !named_size) {
|
||||
result = named_addr;
|
||||
break;
|
||||
}
|
||||
|
||||
named_addr +=
|
||||
sizeof(struct cvmx_bootmem_named_block_desc);
|
||||
}
|
||||
}
|
||||
__cvmx_bootmem_unlock(flags);
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Find a named block on the remote Octeon
|
||||
*
|
||||
* @param name Name of block to find
|
||||
* @param base_addr Address the block is at (OUTPUT)
|
||||
* @param size The size of the block (OUTPUT)
|
||||
*
|
||||
* @return Zero on success, One on failure.
|
||||
*/
|
||||
static int octeon_named_block_find(struct octeon_device *oct, const char *name,
|
||||
u64 *base_addr, u64 *size)
|
||||
{
|
||||
const struct cvmx_bootmem_named_block_desc *named_block;
|
||||
|
||||
octeon_remote_lock();
|
||||
named_block = __cvmx_bootmem_find_named_block_flags(oct, name, 0);
|
||||
octeon_remote_unlock();
|
||||
if (named_block) {
|
||||
*base_addr = named_block->base_addr;
|
||||
*size = named_block->size;
|
||||
return 0;
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
||||
static void octeon_remote_lock(void)
|
||||
{
|
||||
/* fill this in if any sharing is needed */
|
||||
}
|
||||
|
||||
static void octeon_remote_unlock(void)
|
||||
{
|
||||
/* fill this in if any sharing is needed */
|
||||
}
|
||||
|
||||
int octeon_console_send_cmd(struct octeon_device *oct, char *cmd_str,
|
||||
u32 wait_hundredths)
|
||||
{
|
||||
u32 len = strlen(cmd_str);
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "sending \"%s\" to bootloader\n", cmd_str);
|
||||
|
||||
if (len > BOOTLOADER_PCI_WRITE_BUFFER_STR_LEN - 1) {
|
||||
dev_err(&oct->pci_dev->dev, "Command string too long, max length is: %d\n",
|
||||
BOOTLOADER_PCI_WRITE_BUFFER_STR_LEN - 1);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (octeon_wait_for_bootloader(oct, wait_hundredths) != 0) {
|
||||
dev_err(&oct->pci_dev->dev, "Bootloader not ready for command.\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* Write command to bootloader */
|
||||
octeon_remote_lock();
|
||||
octeon_pci_write_core_mem(oct, BOOTLOADER_PCI_READ_BUFFER_DATA_ADDR,
|
||||
(u8 *)cmd_str, len);
|
||||
octeon_write_device_mem32(oct, BOOTLOADER_PCI_READ_BUFFER_LEN_ADDR,
|
||||
len);
|
||||
octeon_write_device_mem32(oct, BOOTLOADER_PCI_READ_BUFFER_OWNER_ADDR,
|
||||
OCTEON_PCI_IO_BUF_OWNER_OCTEON);
|
||||
|
||||
/* Bootloader should accept command very quickly
|
||||
* if it really was ready
|
||||
*/
|
||||
if (octeon_wait_for_bootloader(oct, 200) != 0) {
|
||||
octeon_remote_unlock();
|
||||
dev_err(&oct->pci_dev->dev, "Bootloader did not accept command.\n");
|
||||
return -1;
|
||||
}
|
||||
octeon_remote_unlock();
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_wait_for_bootloader(struct octeon_device *oct,
|
||||
u32 wait_time_hundredths)
|
||||
{
|
||||
dev_dbg(&oct->pci_dev->dev, "waiting %d0 ms for bootloader\n",
|
||||
wait_time_hundredths);
|
||||
|
||||
if (octeon_mem_access_ok(oct))
|
||||
return -1;
|
||||
|
||||
while (wait_time_hundredths > 0 &&
|
||||
octeon_read_device_mem32(oct,
|
||||
BOOTLOADER_PCI_READ_BUFFER_OWNER_ADDR)
|
||||
!= OCTEON_PCI_IO_BUF_OWNER_HOST) {
|
||||
if (--wait_time_hundredths <= 0)
|
||||
return -1;
|
||||
schedule_timeout_uninterruptible(HZ / 100);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void octeon_console_handle_result(struct octeon_device *oct,
|
||||
size_t console_num,
|
||||
char *buffer, s32 bytes_read)
|
||||
{
|
||||
struct octeon_console *console;
|
||||
|
||||
console = &oct->console[console_num];
|
||||
|
||||
console->waiting = 0;
|
||||
}
|
||||
|
||||
static char console_buffer[OCTEON_CONSOLE_MAX_READ_BYTES];
|
||||
|
||||
static void output_console_line(struct octeon_device *oct,
|
||||
struct octeon_console *console,
|
||||
size_t console_num,
|
||||
char *console_buffer,
|
||||
s32 bytes_read)
|
||||
{
|
||||
char *line;
|
||||
s32 i;
|
||||
|
||||
line = console_buffer;
|
||||
for (i = 0; i < bytes_read; i++) {
|
||||
/* Output a line at a time, prefixed */
|
||||
if (console_buffer[i] == '\n') {
|
||||
console_buffer[i] = '\0';
|
||||
if (console->leftover[0]) {
|
||||
dev_info(&oct->pci_dev->dev, "%lu: %s%s\n",
|
||||
console_num, console->leftover,
|
||||
line);
|
||||
console->leftover[0] = '\0';
|
||||
} else {
|
||||
dev_info(&oct->pci_dev->dev, "%lu: %s\n",
|
||||
console_num, line);
|
||||
}
|
||||
line = &console_buffer[i + 1];
|
||||
}
|
||||
}
|
||||
|
||||
/* Save off any leftovers */
|
||||
if (line != &console_buffer[bytes_read]) {
|
||||
console_buffer[bytes_read] = '\0';
|
||||
strcpy(console->leftover, line);
|
||||
}
|
||||
}
|
||||
|
||||
static void check_console(struct work_struct *work)
|
||||
{
|
||||
s32 bytes_read, tries, total_read;
|
||||
struct octeon_console *console;
|
||||
struct cavium_wk *wk = (struct cavium_wk *)work;
|
||||
struct octeon_device *oct = (struct octeon_device *)wk->ctxptr;
|
||||
size_t console_num = wk->ctxul;
|
||||
u32 delay;
|
||||
|
||||
console = &oct->console[console_num];
|
||||
tries = 0;
|
||||
total_read = 0;
|
||||
|
||||
do {
|
||||
/* Take console output regardless of whether it will
|
||||
* be logged
|
||||
*/
|
||||
bytes_read =
|
||||
octeon_console_read(oct, console_num, console_buffer,
|
||||
sizeof(console_buffer) - 1, 0);
|
||||
if (bytes_read > 0) {
|
||||
total_read += bytes_read;
|
||||
if (console->waiting) {
|
||||
octeon_console_handle_result(oct, console_num,
|
||||
console_buffer,
|
||||
bytes_read);
|
||||
}
|
||||
if (octeon_console_debug_enabled(console_num)) {
|
||||
output_console_line(oct, console, console_num,
|
||||
console_buffer, bytes_read);
|
||||
}
|
||||
} else if (bytes_read < 0) {
|
||||
dev_err(&oct->pci_dev->dev, "Error reading console %lu, ret=%d\n",
|
||||
console_num, bytes_read);
|
||||
}
|
||||
|
||||
tries++;
|
||||
} while ((bytes_read > 0) && (tries < 16));
|
||||
|
||||
/* If nothing is read after polling the console,
|
||||
* output any leftovers if any
|
||||
*/
|
||||
if (octeon_console_debug_enabled(console_num) &&
|
||||
(total_read == 0) && (console->leftover[0])) {
|
||||
dev_info(&oct->pci_dev->dev, "%lu: %s\n",
|
||||
console_num, console->leftover);
|
||||
console->leftover[0] = '\0';
|
||||
}
|
||||
|
||||
delay = OCTEON_CONSOLE_POLL_INTERVAL_MS;
|
||||
|
||||
schedule_delayed_work(&wk->work, msecs_to_jiffies(delay));
|
||||
}
|
||||
|
||||
int octeon_init_consoles(struct octeon_device *oct)
|
||||
{
|
||||
int ret = 0;
|
||||
u64 addr, size;
|
||||
|
||||
ret = octeon_mem_access_ok(oct);
|
||||
if (ret) {
|
||||
dev_err(&oct->pci_dev->dev, "Memory access not okay'\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = octeon_named_block_find(oct, OCTEON_PCI_CONSOLE_BLOCK_NAME, &addr,
|
||||
&size);
|
||||
if (ret) {
|
||||
dev_err(&oct->pci_dev->dev, "Could not find console '%s'\n",
|
||||
OCTEON_PCI_CONSOLE_BLOCK_NAME);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* num_consoles > 0 is an indication that the consoles
|
||||
* are accessible
|
||||
*/
|
||||
oct->num_consoles = octeon_read_device_mem32(oct,
|
||||
addr + offsetof(struct octeon_pci_console_desc,
|
||||
num_consoles));
|
||||
oct->console_desc_addr = addr;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "Initialized consoles. %d available\n",
|
||||
oct->num_consoles);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int octeon_add_console(struct octeon_device *oct, u32 console_num)
|
||||
{
|
||||
int ret = 0;
|
||||
u32 delay;
|
||||
u64 coreaddr;
|
||||
struct delayed_work *work;
|
||||
struct octeon_console *console;
|
||||
|
||||
if (console_num >= oct->num_consoles) {
|
||||
dev_err(&oct->pci_dev->dev,
|
||||
"trying to read from console number %d when only 0 to %d exist\n",
|
||||
console_num, oct->num_consoles);
|
||||
} else {
|
||||
console = &oct->console[console_num];
|
||||
|
||||
console->waiting = 0;
|
||||
|
||||
coreaddr = oct->console_desc_addr + console_num * 8 +
|
||||
offsetof(struct octeon_pci_console_desc,
|
||||
console_addr_array);
|
||||
console->addr = octeon_read_device_mem64(oct, coreaddr);
|
||||
coreaddr = console->addr + offsetof(struct octeon_pci_console,
|
||||
buf_size);
|
||||
console->buffer_size = octeon_read_device_mem32(oct, coreaddr);
|
||||
coreaddr = console->addr + offsetof(struct octeon_pci_console,
|
||||
input_base_addr);
|
||||
console->input_base_addr =
|
||||
octeon_read_device_mem64(oct, coreaddr);
|
||||
coreaddr = console->addr + offsetof(struct octeon_pci_console,
|
||||
output_base_addr);
|
||||
console->output_base_addr =
|
||||
octeon_read_device_mem64(oct, coreaddr);
|
||||
console->leftover[0] = '\0';
|
||||
|
||||
work = &oct->console_poll_work[console_num].work;
|
||||
|
||||
INIT_DELAYED_WORK(work, check_console);
|
||||
oct->console_poll_work[console_num].ctxptr = (void *)oct;
|
||||
oct->console_poll_work[console_num].ctxul = console_num;
|
||||
delay = OCTEON_CONSOLE_POLL_INTERVAL_MS;
|
||||
schedule_delayed_work(work, msecs_to_jiffies(delay));
|
||||
|
||||
if (octeon_console_debug_enabled(console_num)) {
|
||||
ret = octeon_console_send_cmd(oct,
|
||||
"setenv pci_console_active 1",
|
||||
2000);
|
||||
}
|
||||
|
||||
console->active = 1;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* Removes all consoles
|
||||
*
|
||||
* @param oct octeon device
|
||||
*/
|
||||
void octeon_remove_consoles(struct octeon_device *oct)
|
||||
{
|
||||
u32 i;
|
||||
struct octeon_console *console;
|
||||
|
||||
for (i = 0; i < oct->num_consoles; i++) {
|
||||
console = &oct->console[i];
|
||||
|
||||
if (!console->active)
|
||||
continue;
|
||||
|
||||
cancel_delayed_work_sync(&oct->console_poll_work[i].
|
||||
work);
|
||||
console->addr = 0;
|
||||
console->buffer_size = 0;
|
||||
console->input_base_addr = 0;
|
||||
console->output_base_addr = 0;
|
||||
}
|
||||
|
||||
oct->num_consoles = 0;
|
||||
}
|
||||
|
||||
static inline int octeon_console_free_bytes(u32 buffer_size,
|
||||
u32 wr_idx,
|
||||
u32 rd_idx)
|
||||
{
|
||||
if (rd_idx >= buffer_size || wr_idx >= buffer_size)
|
||||
return -1;
|
||||
|
||||
return ((buffer_size - 1) - (wr_idx - rd_idx)) % buffer_size;
|
||||
}
|
||||
|
||||
static inline int octeon_console_avail_bytes(u32 buffer_size,
|
||||
u32 wr_idx,
|
||||
u32 rd_idx)
|
||||
{
|
||||
if (rd_idx >= buffer_size || wr_idx >= buffer_size)
|
||||
return -1;
|
||||
|
||||
return buffer_size - 1 -
|
||||
octeon_console_free_bytes(buffer_size, wr_idx, rd_idx);
|
||||
}
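/* Sanity check with sample numbers (illustrative): buffer_size = 1024,
 * wr_idx = 1000, rd_idx = 8 gives free = (1023 - 992) % 1024 = 31 and
 * avail = 1023 - 31 = 992, so free + avail always equals buffer_size - 1,
 * matching the "one slot always unused" convention noted earlier.
 */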
|
||||
|
||||
int octeon_console_read(struct octeon_device *oct, u32 console_num,
|
||||
char *buffer, u32 buf_size, u32 flags)
|
||||
{
|
||||
int bytes_to_read;
|
||||
u32 rd_idx, wr_idx;
|
||||
struct octeon_console *console;
|
||||
|
||||
if (console_num >= oct->num_consoles) {
|
||||
dev_err(&oct->pci_dev->dev, "Attempted to read from disabled console %d\n",
|
||||
console_num);
|
||||
return 0;
|
||||
}
|
||||
|
||||
console = &oct->console[console_num];
|
||||
|
||||
/* Check to see if any data is available.
|
||||
* Maybe optimize this with 64-bit read.
|
||||
*/
|
||||
rd_idx = octeon_read_device_mem32(oct, console->addr +
|
||||
offsetof(struct octeon_pci_console, output_read_index));
|
||||
wr_idx = octeon_read_device_mem32(oct, console->addr +
|
||||
offsetof(struct octeon_pci_console, output_write_index));
|
||||
|
||||
bytes_to_read = octeon_console_avail_bytes(console->buffer_size,
|
||||
wr_idx, rd_idx);
|
||||
if (bytes_to_read <= 0)
|
||||
return bytes_to_read;
|
||||
|
||||
bytes_to_read = MIN(bytes_to_read, (s32)buf_size);
|
||||
|
||||
/* Check to see if what we want to read is not contiguous, and limit
|
||||
* ourselves to the contiguous block
|
||||
*/
|
||||
if (rd_idx + bytes_to_read >= console->buffer_size)
|
||||
bytes_to_read = console->buffer_size - rd_idx;
|
||||
|
||||
octeon_pci_read_core_mem(oct, console->output_base_addr + rd_idx,
|
||||
buffer, bytes_to_read);
|
||||
octeon_write_device_mem32(oct, console->addr +
|
||||
offsetof(struct octeon_pci_console,
|
||||
output_read_index),
|
||||
(rd_idx + bytes_to_read) %
|
||||
console->buffer_size);
|
||||
|
||||
return bytes_to_read;
|
||||
}
|
1307
drivers/net/ethernet/cavium/liquidio/octeon_device.c
Normal file
1307
drivers/net/ethernet/cavium/liquidio/octeon_device.c
Normal file
File diff suppressed because it is too large
649
drivers/net/ethernet/cavium/liquidio/octeon_device.h
Normal file
649
drivers/net/ethernet/cavium/liquidio/octeon_device.h
Normal file
|
@ -0,0 +1,649 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_device.h
|
||||
* \brief Host Driver: This file defines the octeon device structure.
|
||||
*/
|
||||
|
||||
#ifndef _OCTEON_DEVICE_H_
|
||||
#define _OCTEON_DEVICE_H_
|
||||
|
||||
/** PCI VendorId Device Id */
|
||||
#define OCTEON_CN68XX_PCIID 0x91177d
|
||||
#define OCTEON_CN66XX_PCIID 0x92177d
|
||||
|
||||
/** Driver identifies chips by these Ids, created by clubbing together
|
||||
* DeviceId+RevisionId; Where Revision Id is not used to distinguish
|
||||
* between chips, a value of 0 is used for revision id.
|
||||
*/
|
||||
#define OCTEON_CN68XX 0x0091
|
||||
#define OCTEON_CN66XX 0x0092
|
||||
|
||||
/** Endian-swap modes supported by Octeon. */
|
||||
enum octeon_pci_swap_mode {
|
||||
OCTEON_PCI_PASSTHROUGH = 0,
|
||||
OCTEON_PCI_64BIT_SWAP = 1,
|
||||
OCTEON_PCI_32BIT_BYTE_SWAP = 2,
|
||||
OCTEON_PCI_32BIT_LW_SWAP = 3
|
||||
};
|
||||
|
||||
/*--------------- PCI BAR1 index registers -------------*/
|
||||
|
||||
/* BAR1 Mask */
|
||||
#define PCI_BAR1_ENABLE_CA 1
|
||||
#define PCI_BAR1_ENDIAN_MODE OCTEON_PCI_64BIT_SWAP
|
||||
#define PCI_BAR1_ENTRY_VALID 1
|
||||
#define PCI_BAR1_MASK ((PCI_BAR1_ENABLE_CA << 3) \
|
||||
| (PCI_BAR1_ENDIAN_MODE << 1) \
|
||||
| PCI_BAR1_ENTRY_VALID)
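/* With the values above this evaluates to (1 << 3) | (1 << 1) | 1 = 0xb:
 * cacheable access enabled, 64-bit swap endian mode, entry valid.
 */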
|
||||
|
||||
/** Octeon Device state.
|
||||
* Each octeon device goes through each of these states
|
||||
* as it is initialized.
|
||||
*/
|
||||
#define OCT_DEV_BEGIN_STATE 0x0
|
||||
#define OCT_DEV_PCI_MAP_DONE 0x1
|
||||
#define OCT_DEV_DISPATCH_INIT_DONE 0x2
|
||||
#define OCT_DEV_INSTR_QUEUE_INIT_DONE 0x3
|
||||
#define OCT_DEV_SC_BUFF_POOL_INIT_DONE 0x4
|
||||
#define OCT_DEV_RESP_LIST_INIT_DONE 0x5
|
||||
#define OCT_DEV_DROQ_INIT_DONE 0x6
|
||||
#define OCT_DEV_IO_QUEUES_DONE 0x7
|
||||
#define OCT_DEV_CONSOLE_INIT_DONE 0x8
|
||||
#define OCT_DEV_HOST_OK 0x9
|
||||
#define OCT_DEV_CORE_OK 0xa
|
||||
#define OCT_DEV_RUNNING 0xb
|
||||
#define OCT_DEV_IN_RESET 0xc
|
||||
#define OCT_DEV_STATE_INVALID 0xd
|
||||
|
||||
#define OCT_DEV_STATES OCT_DEV_STATE_INVALID
|
||||
|
||||
/** Octeon Device interrupts
|
||||
* These interrupt bits are set in the int_status field of
|
||||
* octeon_device structure
|
||||
*/
|
||||
#define OCT_DEV_INTR_DMA0_FORCE 0x01
|
||||
#define OCT_DEV_INTR_DMA1_FORCE 0x02
|
||||
#define OCT_DEV_INTR_PKT_DATA 0x04
|
||||
|
||||
#define LIO_RESET_SECS (3)
|
||||
|
||||
/*---------------------------DISPATCH LIST-------------------------------*/
|
||||
|
||||
/** The dispatch list entry.
|
||||
* The driver keeps a record of functions registered for each
|
||||
* response header opcode in this structure. Since the opcode is
|
||||
* hashed to index into the driver's list, more than one opcode
|
||||
* can hash to the same entry, in which case the list field points
|
||||
* to a linked list with the other entries.
|
||||
*/
|
||||
struct octeon_dispatch {
|
||||
/** List head for this entry */
|
||||
struct list_head list;
|
||||
|
||||
/** The opcode for which the dispatch function & arg should be used */
|
||||
u16 opcode;
|
||||
|
||||
/** The function to be called for a packet received by the driver */
|
||||
octeon_dispatch_fn_t dispatch_fn;
|
||||
|
||||
/* The application specified argument to be passed to the above
|
||||
* function along with the received packet
|
||||
*/
|
||||
void *arg;
|
||||
};
|
||||
|
||||
/** The dispatch list structure. */
|
||||
struct octeon_dispatch_list {
|
||||
/** access to dispatch list must be atomic */
|
||||
spinlock_t lock;
|
||||
|
||||
/** Count of dispatch functions currently registered */
|
||||
u32 count;
|
||||
|
||||
/** The list of dispatch functions */
|
||||
struct octeon_dispatch *dlist;
|
||||
};
|
||||
|
||||
/*----------------------- THE OCTEON DEVICE ---------------------------*/
|
||||
|
||||
#define OCT_MEM_REGIONS 3
|
||||
/** PCI address space mapping information.
|
||||
* Each of the 3 address spaces given by BAR0, BAR2 and BAR4 of
|
||||
* Octeon gets mapped to different physical address spaces in
|
||||
* the kernel.
|
||||
*/
|
||||
struct octeon_mmio {
|
||||
/** PCI address to which the BAR is mapped. */
|
||||
u64 start;
|
||||
|
||||
/** Length of this PCI address space. */
|
||||
u32 len;
|
||||
|
||||
/** Length that has been mapped to phys. address space. */
|
||||
u32 mapped_len;
|
||||
|
||||
/** The physical address to which the PCI address space is mapped. */
|
||||
u8 __iomem *hw_addr;
|
||||
|
||||
/** Flag indicating the mapping was successful. */
|
||||
u32 done;
|
||||
};
|
||||
|
||||
#define MAX_OCTEON_MAPS 32
|
||||
|
||||
struct octeon_io_enable {
|
||||
u32 iq;
|
||||
u32 oq;
|
||||
u32 iq64B;
|
||||
};
|
||||
|
||||
struct octeon_reg_list {
|
||||
u32 __iomem *pci_win_wr_addr_hi;
|
||||
u32 __iomem *pci_win_wr_addr_lo;
|
||||
u64 __iomem *pci_win_wr_addr;
|
||||
|
||||
u32 __iomem *pci_win_rd_addr_hi;
|
||||
u32 __iomem *pci_win_rd_addr_lo;
|
||||
u64 __iomem *pci_win_rd_addr;
|
||||
|
||||
u32 __iomem *pci_win_wr_data_hi;
|
||||
u32 __iomem *pci_win_wr_data_lo;
|
||||
u64 __iomem *pci_win_wr_data;
|
||||
|
||||
u32 __iomem *pci_win_rd_data_hi;
|
||||
u32 __iomem *pci_win_rd_data_lo;
|
||||
u64 __iomem *pci_win_rd_data;
|
||||
};
|
||||
|
||||
#define OCTEON_CONSOLE_MAX_READ_BYTES 512
|
||||
struct octeon_console {
|
||||
u32 active;
|
||||
u32 waiting;
|
||||
u64 addr;
|
||||
u32 buffer_size;
|
||||
u64 input_base_addr;
|
||||
u64 output_base_addr;
|
||||
char leftover[OCTEON_CONSOLE_MAX_READ_BYTES];
|
||||
};
|
||||
|
||||
struct octeon_board_info {
|
||||
char name[OCT_BOARD_NAME];
|
||||
char serial_number[OCT_SERIAL_LEN];
|
||||
u64 major;
|
||||
u64 minor;
|
||||
};
|
||||
|
||||
struct octeon_fn_list {
|
||||
void (*setup_iq_regs)(struct octeon_device *, u32);
|
||||
void (*setup_oq_regs)(struct octeon_device *, u32);
|
||||
|
||||
irqreturn_t (*process_interrupt_regs)(void *);
|
||||
int (*soft_reset)(struct octeon_device *);
|
||||
int (*setup_device_regs)(struct octeon_device *);
|
||||
void (*reinit_regs)(struct octeon_device *);
|
||||
void (*bar1_idx_setup)(struct octeon_device *, u64, u32, int);
|
||||
void (*bar1_idx_write)(struct octeon_device *, u32, u32);
|
||||
u32 (*bar1_idx_read)(struct octeon_device *, u32);
|
||||
u32 (*update_iq_read_idx)(struct octeon_device *,
|
||||
struct octeon_instr_queue *);
|
||||
|
||||
void (*enable_oq_pkt_time_intr)(struct octeon_device *, u32);
|
||||
void (*disable_oq_pkt_time_intr)(struct octeon_device *, u32);
|
||||
|
||||
void (*enable_interrupt)(void *);
|
||||
void (*disable_interrupt)(void *);
|
||||
|
||||
void (*enable_io_queues)(struct octeon_device *);
|
||||
void (*disable_io_queues)(struct octeon_device *);
|
||||
};
|
||||
|
||||
/* Must be multiple of 8, changing breaks ABI */
|
||||
#define CVMX_BOOTMEM_NAME_LEN 128
|
||||
|
||||
/* Structure for named memory blocks
|
||||
* Number of descriptors
|
||||
* available can be changed without affecting compatibility,
|
||||
* but name length changes require a bump in the bootmem
|
||||
* descriptor version
|
||||
* Note: This structure must be naturally 64 bit aligned, as a single
|
||||
* memory image will be used by both 32 and 64 bit programs.
|
||||
*/
|
||||
struct cvmx_bootmem_named_block_desc {
|
||||
/** Base address of named block */
|
||||
u64 base_addr;
|
||||
|
||||
/** Size actually allocated for named block */
|
||||
u64 size;
|
||||
|
||||
/** name of named block */
|
||||
char name[CVMX_BOOTMEM_NAME_LEN];
|
||||
};
|
||||
|
||||
struct oct_fw_info {
|
||||
u32 max_nic_ports; /** max nic ports for the device */
|
||||
u32 num_gmx_ports; /** num gmx ports */
|
||||
u64 app_cap_flags; /** firmware cap flags */
|
||||
|
||||
/** The core application is running in this mode.
|
||||
* See octeon-drv-opcodes.h for values.
|
||||
*/
|
||||
u32 app_mode;
|
||||
char liquidio_firmware_version[32];
|
||||
};
|
||||
|
||||
/* wrappers around work structs */
|
||||
struct cavium_wk {
|
||||
struct delayed_work work;
|
||||
void *ctxptr;
|
||||
size_t ctxul;
|
||||
};
|
||||
|
||||
struct cavium_wq {
|
||||
struct workqueue_struct *wq;
|
||||
struct cavium_wk wk;
|
||||
};
|
||||
|
||||
struct octdev_props {
|
||||
/* Each interface in the Octeon device has a network
|
||||
* device pointer (used for OS specific calls).
|
||||
*/
|
||||
struct net_device *netdev;
|
||||
};
|
||||
|
||||
/** The Octeon device.
|
||||
* Each Octeon device has this structure to represent all its
|
||||
* components.
|
||||
*/
|
||||
struct octeon_device {
|
||||
/** Lock for PCI window configuration accesses */
|
||||
spinlock_t pci_win_lock;
|
||||
|
||||
/** Lock for memory accesses */
|
||||
spinlock_t mem_access_lock;
|
||||
|
||||
/** PCI device pointer */
|
||||
struct pci_dev *pci_dev;
|
||||
|
||||
/** Chip specific information. */
|
||||
void *chip;
|
||||
|
||||
/** Number of interfaces detected in this octeon device. */
|
||||
u32 ifcount;
|
||||
|
||||
struct octdev_props props[MAX_OCTEON_LINKS];
|
||||
|
||||
/** Octeon Chip type. */
|
||||
u16 chip_id;
|
||||
u16 rev_id;
|
||||
|
||||
/** This device's id - set by the driver. */
|
||||
u32 octeon_id;
|
||||
|
||||
/** This device's PCIe port used for traffic. */
|
||||
u16 pcie_port;
|
||||
|
||||
u16 flags;
|
||||
#define LIO_FLAG_MSI_ENABLED (u32)(1 << 1)
|
||||
#define LIO_FLAG_MSIX_ENABLED (u32)(1 << 2)
|
||||
|
||||
/** The state of this device */
|
||||
atomic_t status;
|
||||
|
||||
/** memory mapped io range */
|
||||
struct octeon_mmio mmio[OCT_MEM_REGIONS];
|
||||
|
||||
struct octeon_reg_list reg_list;
|
||||
|
||||
struct octeon_fn_list fn_list;
|
||||
|
||||
struct octeon_board_info boardinfo;
|
||||
|
||||
u32 num_iqs;
|
||||
|
||||
/* The pool containing pre allocated buffers used for soft commands */
|
||||
struct octeon_sc_buffer_pool sc_buf_pool;
|
||||
|
||||
/** The input instruction queues */
|
||||
struct octeon_instr_queue *instr_queue[MAX_OCTEON_INSTR_QUEUES];
|
||||
|
||||
/** The doubly-linked list of instruction response */
|
||||
struct octeon_response_list response_list[MAX_RESPONSE_LISTS];
|
||||
|
||||
u32 num_oqs;
|
||||
|
||||
/** The DROQ output queues */
|
||||
struct octeon_droq *droq[MAX_OCTEON_OUTPUT_QUEUES];
|
||||
|
||||
struct octeon_io_enable io_qmask;
|
||||
|
||||
/** List of dispatch functions */
|
||||
struct octeon_dispatch_list dispatch;
|
||||
|
||||
/* Interrupt Moderation */
|
||||
struct oct_intrmod_cfg intrmod;
|
||||
|
||||
u32 int_status;
|
||||
|
||||
u64 droq_intr;
|
||||
|
||||
/** Physical location of the cvmx_bootmem_desc_t in octeon memory */
|
||||
u64 bootmem_desc_addr;
|
||||
|
||||
/** Placeholder memory for named blocks.
|
||||
* Assumes single-threaded access
|
||||
*/
|
||||
struct cvmx_bootmem_named_block_desc bootmem_named_block_desc;
|
||||
|
||||
/** Address of consoles descriptor */
|
||||
u64 console_desc_addr;
|
||||
|
||||
/** Number of consoles available. 0 means they are inaccessible */
|
||||
u32 num_consoles;
|
||||
|
||||
/* Console caches */
|
||||
struct octeon_console console[MAX_OCTEON_MAPS];
|
||||
|
||||
/* Coprocessor clock rate. */
|
||||
u64 coproc_clock_rate;
|
||||
|
||||
/** The core application is running in this mode. See liquidio_common.h
|
||||
* for values.
|
||||
*/
|
||||
u32 app_mode;
|
||||
|
||||
struct oct_fw_info fw_info;
|
||||
|
||||
/** The name given to this device. */
|
||||
char device_name[32];
|
||||
|
||||
/** Application Context */
|
||||
void *app_ctx;
|
||||
|
||||
struct cavium_wq dma_comp_wq;
|
||||
|
||||
struct cavium_wq check_db_wq[MAX_OCTEON_INSTR_QUEUES];
|
||||
|
||||
struct cavium_wk nic_poll_work;
|
||||
|
||||
struct cavium_wk console_poll_work[MAX_OCTEON_MAPS];
|
||||
|
||||
void *priv;
|
||||
};
|
||||
|
||||
#define OCTEON_CN6XXX(oct) ((oct->chip_id == OCTEON_CN66XX) || \
|
||||
(oct->chip_id == OCTEON_CN68XX))
|
||||
#define CHIP_FIELD(oct, TYPE, field) \
|
||||
(((struct octeon_ ## TYPE *)(oct->chip))->field)
|
||||
|
||||
struct oct_intrmod_cmd {
|
||||
struct octeon_device *oct_dev;
|
||||
struct octeon_soft_command *sc;
|
||||
struct oct_intrmod_cfg *cfg;
|
||||
};
|
||||
|
||||
/*------------------ Function Prototypes ----------------------*/
|
||||
|
||||
/** Initialize device list memory */
|
||||
void octeon_init_device_list(int conf_type);
|
||||
|
||||
/** Free memory for Input and Output queue structures for a octeon device */
|
||||
void octeon_free_device_mem(struct octeon_device *);
|
||||
|
||||
/* Look up a free entry in the octeon_device table and allocate resources
|
||||
* for the octeon_device structure for an octeon device. Called at init
|
||||
* time.
|
||||
*/
|
||||
struct octeon_device *octeon_allocate_device(u32 pci_id,
|
||||
u32 priv_size);
|
||||
|
||||
/** Initialize the driver's dispatch list which is a mix of a hash table
|
||||
* and a linked list. This is done at driver load time.
|
||||
* @param octeon_dev - pointer to the octeon device structure.
|
||||
* @return 0 on success, else -ve error value
|
||||
*/
|
||||
int octeon_init_dispatch_list(struct octeon_device *octeon_dev);
|
||||
|
||||
/** Delete the driver's dispatch list and all registered entries.
|
||||
* This is done at driver unload time.
|
||||
* @param octeon_dev - pointer to the octeon device structure.
|
||||
*/
|
||||
void octeon_delete_dispatch_list(struct octeon_device *octeon_dev);
|
||||
|
||||
/** Initialize the core device fields with the info returned by the FW.
|
||||
* @param recv_info - Receive info structure
|
||||
* @param buf - Receive buffer
|
||||
*/
|
||||
int octeon_core_drv_init(struct octeon_recv_info *recv_info, void *buf);
|
||||
|
||||
/** Gets the dispatch function registered to receive packets with a
|
||||
* given opcode/subcode.
|
||||
* @param octeon_dev - the octeon device pointer.
|
||||
* @param opcode - the opcode for which the dispatch function
|
||||
* is to be checked.
|
||||
* @param subcode - the subcode for which the dispatch function
|
||||
* is to be checked.
|
||||
*
|
||||
* @return Success: octeon_dispatch_fn_t (dispatch function pointer)
|
||||
* @return Failure: NULL
|
||||
*
|
||||
* Looks up the dispatch list to get the dispatch function for a
|
||||
* given opcode.
|
||||
*/
|
||||
octeon_dispatch_fn_t
|
||||
octeon_get_dispatch(struct octeon_device *octeon_dev, u16 opcode,
|
||||
u16 subcode);
|
||||
|
||||
/** Get the octeon device pointer.
|
||||
* @param octeon_id - The id for which the octeon device pointer is required.
|
||||
* @return Success: Octeon device pointer.
|
||||
* @return Failure: NULL.
|
||||
*/
|
||||
struct octeon_device *lio_get_device(u32 octeon_id);
|
||||
|
||||
/** Get the octeon id assigned to the octeon device passed as argument.
|
||||
* This function is exported to other modules.
|
||||
* @param dev - octeon device pointer passed as a void *.
|
||||
* @return octeon device id
|
||||
*/
|
||||
int lio_get_device_id(void *dev);
|
||||
|
||||
static inline u16 OCTEON_MAJOR_REV(struct octeon_device *oct)
|
||||
{
|
||||
u16 rev = (oct->rev_id & 0xC) >> 2;
|
||||
|
||||
return (rev == 0) ? 1 : rev;
|
||||
}
|
||||
|
||||
static inline u16 OCTEON_MINOR_REV(struct octeon_device *oct)
|
||||
{
|
||||
return oct->rev_id & 0x3;
|
||||
}
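/* Example (illustrative): rev_id = 0x06 decodes as pass 1.2
 * (major = (0x6 & 0xC) >> 2 = 1, minor = 0x6 & 0x3 = 2), while
 * rev_id = 0 is still reported as major revision 1.
 */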
|
||||
|
||||
/** Read windowed register.
|
||||
* @param oct - pointer to the Octeon device.
|
||||
* @param addr - Address of the register to read.
|
||||
*
|
||||
* This routine is called to read from the indirectly accessed
|
||||
* Octeon registers that are visible through a PCI BAR0 mapped window
|
||||
* register.
|
||||
* @return - 64 bit value read from the register.
|
||||
*/
|
||||
|
||||
u64 lio_pci_readq(struct octeon_device *oct, u64 addr);
|
||||
|
||||
/** Write windowed register.
|
||||
* @param oct - pointer to the Octeon device.
|
||||
* @param val - Value to write
|
||||
* @param addr - Address of the register to write
|
||||
*
|
||||
* This routine is called to write to the indirectly accessed
|
||||
* Octeon registers that are visible through a PCI BAR0 mapped window
|
||||
* register.
|
||||
* @return Nothing.
|
||||
*/
|
||||
void lio_pci_writeq(struct octeon_device *oct, u64 val, u64 addr);
|
||||
|
||||
/* Routines for reading and writing CSRs */
|
||||
#define octeon_write_csr(oct_dev, reg_off, value) \
|
||||
writel(value, oct_dev->mmio[0].hw_addr + reg_off)
|
||||
|
||||
#define octeon_write_csr64(oct_dev, reg_off, val64) \
|
||||
writeq(val64, oct_dev->mmio[0].hw_addr + reg_off)
|
||||
|
||||
#define octeon_read_csr(oct_dev, reg_off) \
|
||||
readl(oct_dev->mmio[0].hw_addr + reg_off)
|
||||
|
||||
#define octeon_read_csr64(oct_dev, reg_off) \
|
||||
readq(oct_dev->mmio[0].hw_addr + reg_off)
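/* Hypothetical usage sketch (the register offset below is invented for
 * illustration, not a real CSR name):
 *
 *	u64 ctl = octeon_read_csr64(oct_dev, EXAMPLE_CSR_OFFSET);
 *	octeon_write_csr64(oct_dev, EXAMPLE_CSR_OFFSET, ctl | 0x1ULL);
 *
 * Both macros dereference oct_dev->mmio[0].hw_addr, so they are only
 * valid once the first of the mapped PCI regions has been set up
 * (OCT_DEV_PCI_MAP_DONE).
 */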
|
||||
|
||||
/**
|
||||
* Checks if memory access is okay
|
||||
*
|
||||
* @param oct which octeon to send to
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_mem_access_ok(struct octeon_device *oct);
|
||||
|
||||
/**
|
||||
* Waits for DDR initialization.
|
||||
*
|
||||
* @param oct which octeon to send to
|
||||
* @param timeout_in_ms pointer to how long to wait until DDR is initialized
|
||||
* in ms.
|
||||
* If contents are 0, it waits until contents are non-zero
|
||||
* before starting to check.
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_wait_for_ddr_init(struct octeon_device *oct,
|
||||
u32 *timeout_in_ms);
|
||||
|
||||
/**
|
||||
* Wait for u-boot to boot and be waiting for a command.
|
||||
*
|
||||
* @param wait_time_hundredths
|
||||
* Maximum time to wait
|
||||
*
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_wait_for_bootloader(struct octeon_device *oct,
|
||||
u32 wait_time_hundredths);
|
||||
|
||||
/**
|
||||
* Initialize console access
|
||||
*
|
||||
* @param oct which octeon initialize
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_init_consoles(struct octeon_device *oct);
|
||||
|
||||
/**
|
||||
* Adds access to a console to the device.
|
||||
*
|
||||
* @param oct which octeon to add to
|
||||
* @param console_num which console
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_add_console(struct octeon_device *oct, u32 console_num);
|
||||
|
||||
/** write or read from a console */
|
||||
int octeon_console_write(struct octeon_device *oct, u32 console_num,
|
||||
char *buffer, u32 write_request_size, u32 flags);
|
||||
int octeon_console_write_avail(struct octeon_device *oct, u32 console_num);
|
||||
int octeon_console_read(struct octeon_device *oct, u32 console_num,
|
||||
char *buffer, u32 buf_size, u32 flags);
|
||||
int octeon_console_read_avail(struct octeon_device *oct, u32 console_num);
|
||||
|
||||
/** Removes all attached consoles. */
|
||||
void octeon_remove_consoles(struct octeon_device *oct);
|
||||
|
||||
/**
|
||||
* Send a string to u-boot on console 0 as a command.
|
||||
*
|
||||
* @param oct which octeon to send to
|
||||
* @param cmd_str String to send
|
||||
* @param wait_hundredths Time to wait for u-boot to accept the command.
|
||||
*
|
||||
* @return Zero on success, negative on failure.
|
||||
*/
|
||||
int octeon_console_send_cmd(struct octeon_device *oct, char *cmd_str,
|
||||
u32 wait_hundredths);
|
||||
|
||||
/** Parses, validates, and downloads firmware, then boots associated cores.
|
||||
* @param oct which octeon to download firmware to
|
||||
* @param data - The complete firmware file image
|
||||
* @param size - The size of the data
|
||||
*
|
||||
* @return 0 if success.
|
||||
* -EINVAL if file is incompatible or badly formatted.
|
||||
* -ENODEV if no handler was found for the application type or an
|
||||
* invalid octeon id was passed.
|
||||
*/
|
||||
int octeon_download_firmware(struct octeon_device *oct, const u8 *data,
|
||||
size_t size);
|
||||
|
||||
char *lio_get_state_string(atomic_t *state_ptr);
|
||||
|
||||
/** Sets up instruction queues for the device
|
||||
* @param oct which octeon to setup
|
||||
*
|
||||
* @return 0 if success. 1 if fails
|
||||
*/
|
||||
int octeon_setup_instr_queues(struct octeon_device *oct);
|
||||
|
||||
/** Sets up output queues for the device
|
||||
* @param oct which octeon to setup
|
||||
*
|
||||
* @return 0 if success. 1 if fails
|
||||
*/
|
||||
int octeon_setup_output_queues(struct octeon_device *oct);
|
||||
|
||||
int octeon_get_tx_qsize(struct octeon_device *oct, u32 q_no);
|
||||
|
||||
int octeon_get_rx_qsize(struct octeon_device *oct, u32 q_no);
|
||||
|
||||
/** Turns off the input and output queues for the device
|
||||
* @param oct which octeon to disable
|
||||
*/
|
||||
void octeon_set_io_queues_off(struct octeon_device *oct);
|
||||
|
||||
/** Turns on or off the given output queue for the device
|
||||
* @param oct which octeon to change
|
||||
* @param q_no which queue
|
||||
* @param enable 1 to enable, 0 to disable
|
||||
*/
|
||||
void octeon_set_droq_pkt_op(struct octeon_device *oct, u32 q_no, u32 enable);
|
||||
|
||||
/** Retrieve the config for the device
|
||||
* @param oct which octeon
|
||||
* @param card_type type of card
|
||||
*
|
||||
* @returns pointer to configuration
|
||||
*/
|
||||
void *oct_get_config_info(struct octeon_device *oct, u16 card_type);
|
||||
|
||||
/** Gets the octeon device configuration
|
||||
* @return - pointer to the octeon configuration structure
|
||||
*/
|
||||
struct octeon_config *octeon_get_conf(struct octeon_device *oct);
|
||||
|
||||
#endif
|
988
drivers/net/ethernet/cavium/liquidio/octeon_droq.c
Normal file
988
drivers/net/ethernet/cavium/liquidio/octeon_droq.c
Normal file
|
@ -0,0 +1,988 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
/* #define CAVIUM_ONLY_PERF_MODE */
|
||||
|
||||
#define CVM_MIN(d1, d2) (((d1) < (d2)) ? (d1) : (d2))
|
||||
#define CVM_MAX(d1, d2) (((d1) > (d2)) ? (d1) : (d2))
|
||||
|
||||
struct niclist {
|
||||
struct list_head list;
|
||||
void *ptr;
|
||||
};
|
||||
|
||||
struct __dispatch {
|
||||
struct list_head list;
|
||||
struct octeon_recv_info *rinfo;
|
||||
octeon_dispatch_fn_t disp_fn;
|
||||
};
|
||||
|
||||
/** Get the argument that the user set when registering dispatch
|
||||
* function for a given opcode/subcode.
|
||||
* @param octeon_dev - the octeon device pointer.
|
||||
* @param opcode - the opcode for which the dispatch argument
|
||||
* is to be checked.
|
||||
* @param subcode - the subcode for which the dispatch argument
|
||||
* is to be checked.
|
||||
* @return Success: void * (argument to the dispatch function)
|
||||
* @return Failure: NULL
|
||||
*
|
||||
*/
|
||||
static inline void *octeon_get_dispatch_arg(struct octeon_device *octeon_dev,
|
||||
u16 opcode, u16 subcode)
|
||||
{
|
||||
int idx;
|
||||
struct list_head *dispatch;
|
||||
void *fn_arg = NULL;
|
||||
u16 combined_opcode = OPCODE_SUBCODE(opcode, subcode);
|
||||
|
||||
idx = combined_opcode & OCTEON_OPCODE_MASK;
|
||||
|
||||
spin_lock_bh(&octeon_dev->dispatch.lock);
|
||||
|
||||
if (octeon_dev->dispatch.count == 0) {
|
||||
spin_unlock_bh(&octeon_dev->dispatch.lock);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (octeon_dev->dispatch.dlist[idx].opcode == combined_opcode) {
|
||||
fn_arg = octeon_dev->dispatch.dlist[idx].arg;
|
||||
} else {
|
||||
list_for_each(dispatch,
|
||||
&octeon_dev->dispatch.dlist[idx].list) {
|
||||
if (((struct octeon_dispatch *)dispatch)->opcode ==
|
||||
combined_opcode) {
|
||||
fn_arg = ((struct octeon_dispatch *)
|
||||
dispatch)->arg;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
spin_unlock_bh(&octeon_dev->dispatch.lock);
|
||||
return fn_arg;
|
||||
}
|
||||
|
||||
u32 octeon_droq_check_hw_for_pkts(struct octeon_device *oct,
|
||||
struct octeon_droq *droq)
|
||||
{
|
||||
u32 pkt_count = 0;
|
||||
|
||||
pkt_count = readl(droq->pkts_sent_reg);
|
||||
if (pkt_count) {
|
||||
atomic_add(pkt_count, &droq->pkts_pending);
|
||||
writel(pkt_count, droq->pkts_sent_reg);
|
||||
}
|
||||
|
||||
return pkt_count;
|
||||
}
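/* Sketch of the intent (the exact register semantics are chip specific):
 * reading pkts_sent_reg reports how many packets arrived since the last
 * acknowledgement, and writing that same count back acknowledges them so
 * the next poll does not count the same packets again.
 */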
|
||||
|
||||
static void octeon_droq_compute_max_packet_bufs(struct octeon_droq *droq)
|
||||
{
|
||||
u32 count = 0;
|
||||
|
||||
/* max_empty_descs is the max. no. of descs that can have no buffers.
|
||||
* If the empty desc count goes beyond this value, we cannot safely
|
||||
* read in a 64K packet sent by Octeon
|
||||
* (64K is max pkt size from Octeon)
|
||||
*/
|
||||
droq->max_empty_descs = 0;
|
||||
|
||||
do {
|
||||
droq->max_empty_descs++;
|
||||
count += droq->buffer_size;
|
||||
} while (count < (64 * 1024));
|
||||
|
||||
droq->max_empty_descs = droq->max_count - droq->max_empty_descs;
|
||||
}
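/* Illustrative numbers: with buffer_size = 2048 the loop above counts
 * 32 descriptors for a worst-case 64K packet, so with max_count = 128
 * descriptors the queue tolerates at most 128 - 32 = 96 empty
 * (buffer-less) descriptors before a 64K packet could no longer be
 * received safely.
 */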
|
||||
|
||||
static void octeon_droq_reset_indices(struct octeon_droq *droq)
|
||||
{
|
||||
droq->read_idx = 0;
|
||||
droq->write_idx = 0;
|
||||
droq->refill_idx = 0;
|
||||
droq->refill_count = 0;
|
||||
atomic_set(&droq->pkts_pending, 0);
|
||||
}
|
||||
|
||||
static void
|
||||
octeon_droq_destroy_ring_buffers(struct octeon_device *oct,
|
||||
struct octeon_droq *droq)
|
||||
{
|
||||
u32 i;
|
||||
|
||||
for (i = 0; i < droq->max_count; i++) {
|
||||
if (droq->recv_buf_list[i].buffer) {
|
||||
if (droq->desc_ring) {
|
||||
lio_unmap_ring_info(oct->pci_dev,
|
||||
(u64)droq->
|
||||
desc_ring[i].info_ptr,
|
||||
OCT_DROQ_INFO_SIZE);
|
||||
lio_unmap_ring(oct->pci_dev,
|
||||
(u64)droq->desc_ring[i].
|
||||
buffer_ptr,
|
||||
droq->buffer_size);
|
||||
}
|
||||
recv_buffer_free(droq->recv_buf_list[i].buffer);
|
||||
droq->recv_buf_list[i].buffer = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
octeon_droq_reset_indices(droq);
|
||||
}
|
||||
|
||||
static int
|
||||
octeon_droq_setup_ring_buffers(struct octeon_device *oct,
|
||||
struct octeon_droq *droq)
|
||||
{
|
||||
u32 i;
|
||||
void *buf;
|
||||
struct octeon_droq_desc *desc_ring = droq->desc_ring;
|
||||
|
||||
for (i = 0; i < droq->max_count; i++) {
|
||||
buf = recv_buffer_alloc(oct, droq->q_no, droq->buffer_size);
|
||||
|
||||
if (!buf) {
|
||||
dev_err(&oct->pci_dev->dev, "%s buffer alloc failed\n",
|
||||
__func__);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
droq->recv_buf_list[i].buffer = buf;
|
||||
droq->recv_buf_list[i].data = get_rbd(buf);
|
||||
|
||||
droq->info_list[i].length = 0;
|
||||
|
||||
/* map ring buffers into memory */
|
||||
desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
|
||||
desc_ring[i].buffer_ptr =
|
||||
lio_map_ring(oct->pci_dev,
|
||||
droq->recv_buf_list[i].buffer,
|
||||
droq->buffer_size);
|
||||
}
|
||||
|
||||
octeon_droq_reset_indices(droq);
|
||||
|
||||
octeon_droq_compute_max_packet_bufs(droq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_delete_droq(struct octeon_device *oct, u32 q_no)
|
||||
{
|
||||
struct octeon_droq *droq = oct->droq[q_no];
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "%s[%d]\n", __func__, q_no);
|
||||
|
||||
octeon_droq_destroy_ring_buffers(oct, droq);
|
||||
|
||||
if (droq->recv_buf_list)
|
||||
vfree(droq->recv_buf_list);
|
||||
|
||||
if (droq->info_base_addr)
|
||||
cnnic_free_aligned_dma(oct->pci_dev, droq->info_list,
|
||||
droq->info_alloc_size,
|
||||
droq->info_base_addr,
|
||||
droq->info_list_dma);
|
||||
|
||||
if (droq->desc_ring)
|
||||
lio_dma_free(oct, (droq->max_count * OCT_DROQ_DESC_SIZE),
|
||||
droq->desc_ring, droq->desc_ring_dma);
|
||||
|
||||
memset(droq, 0, OCT_DROQ_SIZE);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_init_droq(struct octeon_device *oct,
|
||||
u32 q_no,
|
||||
u32 num_descs,
|
||||
u32 desc_size,
|
||||
void *app_ctx)
|
||||
{
|
||||
struct octeon_droq *droq;
|
||||
u32 desc_ring_size = 0, c_num_descs = 0, c_buf_size = 0;
|
||||
u32 c_pkts_per_intr = 0, c_refill_threshold = 0;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "%s[%d]\n", __func__, q_no);
|
||||
|
||||
droq = oct->droq[q_no];
|
||||
memset(droq, 0, OCT_DROQ_SIZE);
|
||||
|
||||
droq->oct_dev = oct;
|
||||
droq->q_no = q_no;
|
||||
if (app_ctx)
|
||||
droq->app_ctx = app_ctx;
|
||||
else
|
||||
droq->app_ctx = (void *)(size_t)q_no;
|
||||
|
||||
c_num_descs = num_descs;
|
||||
c_buf_size = desc_size;
|
||||
if (OCTEON_CN6XXX(oct)) {
|
||||
struct octeon_config *conf6x = CHIP_FIELD(oct, cn6xxx, conf);
|
||||
|
||||
c_pkts_per_intr = (u32)CFG_GET_OQ_PKTS_PER_INTR(conf6x);
|
||||
c_refill_threshold = (u32)CFG_GET_OQ_REFILL_THRESHOLD(conf6x);
|
||||
}
|
||||
|
||||
droq->max_count = c_num_descs;
|
||||
droq->buffer_size = c_buf_size;
|
||||
|
||||
desc_ring_size = droq->max_count * OCT_DROQ_DESC_SIZE;
|
||||
droq->desc_ring = lio_dma_alloc(oct, desc_ring_size,
|
||||
(dma_addr_t *)&droq->desc_ring_dma);
|
||||
|
||||
if (!droq->desc_ring) {
|
||||
dev_err(&oct->pci_dev->dev,
|
||||
"Output queue %d ring alloc failed\n", q_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
|
||||
q_no, droq->desc_ring, droq->desc_ring_dma);
|
||||
dev_dbg(&oct->pci_dev->dev, "droq[%d]: num_desc: %d\n", q_no,
|
||||
droq->max_count);
|
||||
|
||||
droq->info_list =
|
||||
cnnic_alloc_aligned_dma(oct->pci_dev,
|
||||
(droq->max_count * OCT_DROQ_INFO_SIZE),
|
||||
&droq->info_alloc_size,
|
||||
&droq->info_base_addr,
|
||||
&droq->info_list_dma);
|
||||
|
||||
if (!droq->info_list) {
|
||||
dev_err(&oct->pci_dev->dev, "Cannot allocate memory for info list.\n");
|
||||
lio_dma_free(oct, (droq->max_count * OCT_DROQ_DESC_SIZE),
|
||||
droq->desc_ring, droq->desc_ring_dma);
|
||||
return 1;
|
||||
}
|
||||
|
||||
droq->recv_buf_list = (struct octeon_recv_buffer *)
|
||||
vmalloc(droq->max_count *
|
||||
OCT_DROQ_RECVBUF_SIZE);
|
||||
if (!droq->recv_buf_list) {
|
||||
dev_err(&oct->pci_dev->dev, "Output queue recv buf list alloc failed\n");
|
||||
goto init_droq_fail;
|
||||
}
|
||||
|
||||
if (octeon_droq_setup_ring_buffers(oct, droq))
|
||||
goto init_droq_fail;
|
||||
|
||||
droq->pkts_per_intr = c_pkts_per_intr;
|
||||
droq->refill_threshold = c_refill_threshold;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "DROQ INIT: max_empty_descs: %d\n",
|
||||
droq->max_empty_descs);
|
||||
|
||||
spin_lock_init(&droq->lock);
|
||||
|
||||
INIT_LIST_HEAD(&droq->dispatch_list);
|
||||
|
||||
/* For 56xx Pass1, this function won't be called, so no checks. */
|
||||
oct->fn_list.setup_oq_regs(oct, q_no);
|
||||
|
||||
oct->io_qmask.oq |= (1 << q_no);
|
||||
|
||||
return 0;
|
||||
|
||||
init_droq_fail:
|
||||
octeon_delete_droq(oct, q_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* octeon_create_recv_info
|
||||
* Parameters:
|
||||
* octeon_dev - pointer to the octeon device structure
|
||||
* droq - droq in which the packet arrived.
|
||||
* buf_cnt - no. of buffers used by the packet.
|
||||
* idx - index in the descriptor for the first buffer in the packet.
|
||||
* Description:
|
||||
* Allocates a recv_info_t and copies the buffer addresses for packet data
|
||||
* into the recv_pkt space which starts at an 8B offset from recv_info_t.
|
||||
* Flags the descriptors for refill later. If available descriptors go
|
||||
* below the threshold to receive a 64K pkt, new buffers are first allocated
|
||||
* before the recv_pkt_t is created.
|
||||
* This routine will be called in interrupt context.
|
||||
* Returns:
|
||||
* Success: Pointer to recv_info_t
|
||||
* Failure: NULL.
|
||||
* Locks:
|
||||
* The droq->lock is held when this routine is called.
|
||||
*/
|
||||
static inline struct octeon_recv_info *octeon_create_recv_info(
|
||||
struct octeon_device *octeon_dev,
|
||||
struct octeon_droq *droq,
|
||||
u32 buf_cnt,
|
||||
u32 idx)
|
||||
{
|
||||
struct octeon_droq_info *info;
|
||||
struct octeon_recv_pkt *recv_pkt;
|
||||
struct octeon_recv_info *recv_info;
|
||||
u32 i, bytes_left;
|
||||
|
||||
info = &droq->info_list[idx];
|
||||
|
||||
recv_info = octeon_alloc_recv_info(sizeof(struct __dispatch));
|
||||
if (!recv_info)
|
||||
return NULL;
|
||||
|
||||
recv_pkt = recv_info->recv_pkt;
|
||||
recv_pkt->rh = info->rh;
|
||||
recv_pkt->length = (u32)info->length;
|
||||
recv_pkt->buffer_count = (u16)buf_cnt;
|
||||
recv_pkt->octeon_id = (u16)octeon_dev->octeon_id;
|
||||
|
||||
i = 0;
|
||||
bytes_left = (u32)info->length;
|
||||
|
||||
while (buf_cnt) {
|
||||
lio_unmap_ring(octeon_dev->pci_dev,
|
||||
(u64)droq->desc_ring[idx].buffer_ptr,
|
||||
droq->buffer_size);
|
||||
|
||||
recv_pkt->buffer_size[i] =
|
||||
(bytes_left >=
|
||||
droq->buffer_size) ? droq->buffer_size : bytes_left;
|
||||
|
||||
recv_pkt->buffer_ptr[i] = droq->recv_buf_list[idx].buffer;
|
||||
droq->recv_buf_list[idx].buffer = NULL;
|
||||
|
||||
INCR_INDEX_BY1(idx, droq->max_count);
|
||||
bytes_left -= droq->buffer_size;
|
||||
i++;
|
||||
buf_cnt--;
|
||||
}
|
||||
|
||||
return recv_info;
|
||||
}
|
||||
|
||||
/* If we were not able to refill all buffers, try to move around
|
||||
* the buffers that were not dispatched.
|
||||
*/
|
||||
static inline u32
|
||||
octeon_droq_refill_pullup_descs(struct octeon_droq *droq,
|
||||
struct octeon_droq_desc *desc_ring)
|
||||
{
|
||||
u32 desc_refilled = 0;
|
||||
|
||||
u32 refill_index = droq->refill_idx;
|
||||
|
||||
while (refill_index != droq->read_idx) {
|
||||
if (droq->recv_buf_list[refill_index].buffer) {
|
||||
droq->recv_buf_list[droq->refill_idx].buffer =
|
||||
droq->recv_buf_list[refill_index].buffer;
|
||||
droq->recv_buf_list[droq->refill_idx].data =
|
||||
droq->recv_buf_list[refill_index].data;
|
||||
desc_ring[droq->refill_idx].buffer_ptr =
|
||||
desc_ring[refill_index].buffer_ptr;
|
||||
droq->recv_buf_list[refill_index].buffer = NULL;
|
||||
desc_ring[refill_index].buffer_ptr = 0;
|
||||
do {
|
||||
INCR_INDEX_BY1(droq->refill_idx,
|
||||
droq->max_count);
|
||||
desc_refilled++;
|
||||
droq->refill_count--;
|
||||
} while (droq->recv_buf_list[droq->refill_idx].
|
||||
buffer);
|
||||
}
|
||||
INCR_INDEX_BY1(refill_index, droq->max_count);
|
||||
} /* while */
|
||||
return desc_refilled;
|
||||
}
|
||||
|
||||
/* octeon_droq_refill
|
||||
* Parameters:
|
||||
* droq - droq in which descriptors require new buffers.
|
||||
* Description:
|
||||
* Called during normal DROQ processing in interrupt mode or by the poll
|
||||
* thread to refill the descriptors from which buffers were dispatched
|
||||
* to upper layers. Attempts to allocate new buffers. If that fails, moves
|
||||
* up buffers (that were not dispatched) to form a contiguous ring.
|
||||
* Returns:
|
||||
* No of descriptors refilled.
|
||||
* Locks:
|
||||
* This routine is called with droq->lock held.
|
||||
*/
|
||||
static u32
|
||||
octeon_droq_refill(struct octeon_device *octeon_dev, struct octeon_droq *droq)
|
||||
{
|
||||
struct octeon_droq_desc *desc_ring;
|
||||
void *buf = NULL;
|
||||
u8 *data;
|
||||
u32 desc_refilled = 0;
|
||||
|
||||
desc_ring = droq->desc_ring;
|
||||
|
||||
while (droq->refill_count && (desc_refilled < droq->max_count)) {
|
||||
/* If a valid buffer exists (happens if there is no dispatch),
|
||||
* reuse
|
||||
* the buffer, else allocate.
|
||||
*/
|
||||
if (!droq->recv_buf_list[droq->refill_idx].buffer) {
|
||||
buf = recv_buffer_alloc(octeon_dev, droq->q_no,
|
||||
droq->buffer_size);
|
||||
/* If a buffer could not be allocated, no point in
|
||||
* continuing
|
||||
*/
|
||||
if (!buf)
|
||||
break;
|
||||
droq->recv_buf_list[droq->refill_idx].buffer =
|
||||
buf;
|
||||
data = get_rbd(buf);
|
||||
} else {
|
||||
data = get_rbd(droq->recv_buf_list
|
||||
[droq->refill_idx].buffer);
|
||||
}
|
||||
|
||||
droq->recv_buf_list[droq->refill_idx].data = data;
|
||||
|
||||
desc_ring[droq->refill_idx].buffer_ptr =
|
||||
lio_map_ring(octeon_dev->pci_dev,
|
||||
droq->recv_buf_list[droq->
|
||||
refill_idx].buffer,
|
||||
droq->buffer_size);
|
||||
|
||||
/* Reset any previous values in the length field. */
|
||||
droq->info_list[droq->refill_idx].length = 0;
|
||||
|
||||
INCR_INDEX_BY1(droq->refill_idx, droq->max_count);
|
||||
desc_refilled++;
|
||||
droq->refill_count--;
|
||||
}
|
||||
|
||||
if (droq->refill_count)
|
||||
desc_refilled +=
|
||||
octeon_droq_refill_pullup_descs(droq, desc_ring);
|
||||
|
||||
/* If droq->refill_count is still non-zero after the pullup pass:
|
||||
* The refill count would not change in pass two. We only moved buffers
|
||||
* to close the gap in the ring, but we would still have the same no. of
|
||||
* buffers to refill.
|
||||
*/
|
||||
return desc_refilled;
|
||||
}
|
||||
|
||||
static inline u32
|
||||
octeon_droq_get_bufcount(u32 buf_size, u32 total_len)
|
||||
{
|
||||
u32 buf_cnt = 0;
|
||||
|
||||
while (total_len > (buf_size * buf_cnt))
|
||||
buf_cnt++;
|
||||
return buf_cnt;
|
||||
}
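The loop above is a ceiling division; a hedged, equivalent formulation using the kernel's DIV_ROUND_UP() helper (the _alt name is only for illustration) would be:

static inline u32
octeon_droq_get_bufcount_alt(u32 buf_size, u32 total_len)
{
	/* Number of DROQ buffers needed to hold total_len bytes. */
	return DIV_ROUND_UP(total_len, buf_size);
}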
|
||||
|
||||
static int
|
||||
octeon_droq_dispatch_pkt(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
union octeon_rh *rh,
|
||||
struct octeon_droq_info *info)
|
||||
{
|
||||
u32 cnt;
|
||||
octeon_dispatch_fn_t disp_fn;
|
||||
struct octeon_recv_info *rinfo;
|
||||
|
||||
cnt = octeon_droq_get_bufcount(droq->buffer_size, (u32)info->length);
|
||||
|
||||
disp_fn = octeon_get_dispatch(oct, (u16)rh->r.opcode,
|
||||
(u16)rh->r.subcode);
|
||||
if (disp_fn) {
|
||||
rinfo = octeon_create_recv_info(oct, droq, cnt, droq->read_idx);
|
||||
if (rinfo) {
|
||||
struct __dispatch *rdisp = rinfo->rsvd;
|
||||
|
||||
rdisp->rinfo = rinfo;
|
||||
rdisp->disp_fn = disp_fn;
|
||||
rinfo->recv_pkt->rh = *rh;
|
||||
list_add_tail(&rdisp->list,
|
||||
&droq->dispatch_list);
|
||||
} else {
|
||||
droq->stats.dropped_nomem++;
|
||||
}
|
||||
} else {
|
||||
dev_err(&oct->pci_dev->dev, "DROQ: No dispatch function\n");
|
||||
droq->stats.dropped_nodispatch++;
|
||||
} /* else (dispatch_fn ... */
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
static inline void octeon_droq_drop_packets(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
u32 cnt)
|
||||
{
|
||||
u32 i = 0, buf_cnt;
|
||||
struct octeon_droq_info *info;
|
||||
|
||||
for (i = 0; i < cnt; i++) {
|
||||
info = &droq->info_list[droq->read_idx];
|
||||
octeon_swap_8B_data((u64 *)info, 2);
|
||||
|
||||
if (info->length) {
|
||||
info->length -= OCT_RH_SIZE;
|
||||
droq->stats.bytes_received += info->length;
|
||||
buf_cnt = octeon_droq_get_bufcount(droq->buffer_size,
|
||||
(u32)info->length);
|
||||
} else {
|
||||
dev_err(&oct->pci_dev->dev, "DROQ: In drop: pkt with len 0\n");
|
||||
buf_cnt = 1;
|
||||
}
|
||||
|
||||
INCR_INDEX(droq->read_idx, buf_cnt, droq->max_count);
|
||||
droq->refill_count += buf_cnt;
|
||||
}
|
||||
}
|
||||
|
||||
static u32
|
||||
octeon_droq_fast_process_packets(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
u32 pkts_to_process)
|
||||
{
|
||||
struct octeon_droq_info *info;
|
||||
union octeon_rh *rh;
|
||||
u32 pkt, total_len = 0, pkt_count;
|
||||
|
||||
pkt_count = pkts_to_process;
|
||||
|
||||
for (pkt = 0; pkt < pkt_count; pkt++) {
|
||||
u32 pkt_len = 0;
|
||||
struct sk_buff *nicbuf = NULL;
|
||||
|
||||
info = &droq->info_list[droq->read_idx];
|
||||
octeon_swap_8B_data((u64 *)info, 2);
|
||||
|
||||
if (!info->length) {
|
||||
dev_err(&oct->pci_dev->dev,
|
||||
"DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
|
||||
droq->q_no, droq->read_idx, pkt_count);
|
||||
print_hex_dump_bytes("", DUMP_PREFIX_ADDRESS,
|
||||
(u8 *)info,
|
||||
OCT_DROQ_INFO_SIZE);
|
||||
break;
|
||||
}
|
||||
|
||||
/* Len of resp hdr is included in the received data len. */
|
||||
info->length -= OCT_RH_SIZE;
|
||||
rh = &info->rh;
|
||||
|
||||
total_len += (u32)info->length;
|
||||
|
||||
if (OPCODE_SLOW_PATH(rh)) {
|
||||
u32 buf_cnt;
|
||||
|
||||
buf_cnt = octeon_droq_dispatch_pkt(oct, droq, rh, info);
|
||||
INCR_INDEX(droq->read_idx, buf_cnt, droq->max_count);
|
||||
droq->refill_count += buf_cnt;
|
||||
} else {
|
||||
if (info->length <= droq->buffer_size) {
|
||||
lio_unmap_ring(oct->pci_dev,
|
||||
(u64)droq->desc_ring[
|
||||
droq->read_idx].buffer_ptr,
|
||||
droq->buffer_size);
|
||||
pkt_len = (u32)info->length;
|
||||
nicbuf = droq->recv_buf_list[
|
||||
droq->read_idx].buffer;
|
||||
droq->recv_buf_list[droq->read_idx].buffer =
|
||||
NULL;
|
||||
INCR_INDEX_BY1(droq->read_idx, droq->max_count);
|
||||
skb_put(nicbuf, pkt_len);
|
||||
droq->refill_count++;
|
||||
} else {
|
||||
nicbuf = octeon_fast_packet_alloc(oct, droq,
|
||||
droq->q_no,
|
||||
(u32)
|
||||
info->length);
|
||||
pkt_len = 0;
|
||||
/* nicbuf allocation can fail. We'll handle it
|
||||
* inside the loop.
|
||||
*/
|
||||
while (pkt_len < info->length) {
|
||||
int cpy_len;
|
||||
|
||||
cpy_len = ((pkt_len +
|
||||
droq->buffer_size) >
|
||||
info->length) ?
|
||||
((u32)info->length - pkt_len) :
|
||||
droq->buffer_size;
|
||||
|
||||
if (nicbuf) {
|
||||
lio_unmap_ring(oct->pci_dev,
|
||||
(u64)
|
||||
droq->desc_ring
|
||||
[droq->read_idx].
|
||||
buffer_ptr,
|
||||
droq->
|
||||
buffer_size);
|
||||
octeon_fast_packet_next(droq,
|
||||
nicbuf,
|
||||
cpy_len,
|
||||
droq->
|
||||
read_idx
|
||||
);
|
||||
}
|
||||
|
||||
pkt_len += cpy_len;
|
||||
INCR_INDEX_BY1(droq->read_idx,
|
||||
droq->max_count);
|
||||
droq->refill_count++;
|
||||
}
|
||||
}
|
||||
|
||||
if (nicbuf) {
|
||||
if (droq->ops.fptr)
|
||||
droq->ops.fptr(oct->octeon_id,
|
||||
nicbuf, pkt_len,
|
||||
rh, &droq->napi);
|
||||
else
|
||||
recv_buffer_free(nicbuf);
|
||||
}
|
||||
}
|
||||
|
||||
if (droq->refill_count >= droq->refill_threshold) {
|
||||
int desc_refilled = octeon_droq_refill(oct, droq);
|
||||
|
||||
/* Flush the droq descriptor data to memory to be sure
|
||||
* that when we update the credits the data in memory
|
||||
* is accurate.
|
||||
*/
|
||||
wmb();
|
||||
writel((desc_refilled), droq->pkts_credit_reg);
|
||||
/* make sure mmio write completes */
|
||||
mmiowb();
|
||||
}
|
||||
|
||||
} /* for ( each packet )... */
|
||||
|
||||
/* Update the queue stats with the packets and bytes processed above. */
|
||||
droq->stats.pkts_received += pkt;
|
||||
droq->stats.bytes_received += total_len;
|
||||
|
||||
if ((droq->ops.drop_on_max) && (pkts_to_process - pkt)) {
|
||||
octeon_droq_drop_packets(oct, droq, (pkts_to_process - pkt));
|
||||
|
||||
droq->stats.dropped_toomany += (pkts_to_process - pkt);
|
||||
return pkts_to_process;
|
||||
}
|
||||
|
||||
return pkt;
|
||||
}
|
||||
|
||||
int
|
||||
octeon_droq_process_packets(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
u32 budget)
|
||||
{
|
||||
u32 pkt_count = 0, pkts_processed = 0;
|
||||
struct list_head *tmp, *tmp2;
|
||||
|
||||
pkt_count = atomic_read(&droq->pkts_pending);
|
||||
if (!pkt_count)
|
||||
return 0;
|
||||
|
||||
if (pkt_count > budget)
|
||||
pkt_count = budget;
|
||||
|
||||
/* Grab the lock */
|
||||
spin_lock(&droq->lock);
|
||||
|
||||
pkts_processed = octeon_droq_fast_process_packets(oct, droq, pkt_count);
|
||||
|
||||
atomic_sub(pkts_processed, &droq->pkts_pending);
|
||||
|
||||
/* Release the spin lock */
|
||||
spin_unlock(&droq->lock);
|
||||
|
||||
list_for_each_safe(tmp, tmp2, &droq->dispatch_list) {
|
||||
struct __dispatch *rdisp = (struct __dispatch *)tmp;
|
||||
|
||||
list_del(tmp);
|
||||
rdisp->disp_fn(rdisp->rinfo,
|
||||
octeon_get_dispatch_arg
|
||||
(oct,
|
||||
(u16)rdisp->rinfo->recv_pkt->rh.r.opcode,
|
||||
(u16)rdisp->rinfo->recv_pkt->rh.r.subcode));
|
||||
}
|
||||
|
||||
/* If there are packets still pending, schedule the tasklet again. */
|
||||
if (atomic_read(&droq->pkts_pending))
|
||||
return 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Utility function to poll for packets. octeon_droq_check_hw_for_pkts() must be
|
||||
* called before calling this routine.
|
||||
*/
|
||||
|
||||
static int
|
||||
octeon_droq_process_poll_pkts(struct octeon_device *oct,
|
||||
struct octeon_droq *droq, u32 budget)
|
||||
{
|
||||
struct list_head *tmp, *tmp2;
|
||||
u32 pkts_available = 0, pkts_processed = 0;
|
||||
u32 total_pkts_processed = 0;
|
||||
|
||||
if (budget > droq->max_count)
|
||||
budget = droq->max_count;
|
||||
|
||||
spin_lock(&droq->lock);
|
||||
|
||||
while (total_pkts_processed < budget) {
|
||||
pkts_available =
|
||||
CVM_MIN((budget - total_pkts_processed),
|
||||
(u32)(atomic_read(&droq->pkts_pending)));
|
||||
|
||||
if (pkts_available == 0)
|
||||
break;
|
||||
|
||||
pkts_processed =
|
||||
octeon_droq_fast_process_packets(oct, droq,
|
||||
pkts_available);
|
||||
|
||||
atomic_sub(pkts_processed, &droq->pkts_pending);
|
||||
|
||||
total_pkts_processed += pkts_processed;
|
||||
|
||||
octeon_droq_check_hw_for_pkts(oct, droq);
|
||||
}
|
||||
|
||||
spin_unlock(&droq->lock);
|
||||
|
||||
list_for_each_safe(tmp, tmp2, &droq->dispatch_list) {
|
||||
struct __dispatch *rdisp = (struct __dispatch *)tmp;
|
||||
|
||||
list_del(tmp);
|
||||
rdisp->disp_fn(rdisp->rinfo,
|
||||
octeon_get_dispatch_arg
|
||||
(oct,
|
||||
(u16)rdisp->rinfo->recv_pkt->rh.r.opcode,
|
||||
(u16)rdisp->rinfo->recv_pkt->rh.r.subcode));
|
||||
}
|
||||
|
||||
return total_pkts_processed;
|
||||
}
|
||||
|
||||
int
|
||||
octeon_process_droq_poll_cmd(struct octeon_device *oct, u32 q_no, int cmd,
|
||||
u32 arg)
|
||||
{
|
||||
struct octeon_droq *droq;
|
||||
struct octeon_config *oct_cfg = NULL;
|
||||
|
||||
oct_cfg = octeon_get_conf(oct);
|
||||
|
||||
if (!oct_cfg)
|
||||
return -EINVAL;
|
||||
|
||||
if (q_no >= CFG_GET_OQ_MAX_Q(oct_cfg)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: droq id (%d) exceeds MAX (%d)\n",
|
||||
__func__, q_no, (oct->num_oqs - 1));
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
droq = oct->droq[q_no];
|
||||
|
||||
if (cmd == POLL_EVENT_PROCESS_PKTS)
|
||||
return octeon_droq_process_poll_pkts(oct, droq, arg);
|
||||
|
||||
if (cmd == POLL_EVENT_PENDING_PKTS) {
|
||||
u32 pkt_cnt = atomic_read(&droq->pkts_pending);
|
||||
|
||||
return octeon_droq_process_packets(oct, droq, pkt_cnt);
|
||||
}
|
||||
|
||||
if (cmd == POLL_EVENT_ENABLE_INTR) {
|
||||
u32 value;
|
||||
unsigned long flags;
|
||||
|
||||
/* Enable Pkt Interrupt */
|
||||
switch (oct->chip_id) {
|
||||
case OCTEON_CN66XX:
|
||||
case OCTEON_CN68XX: {
|
||||
struct octeon_cn6xxx *cn6xxx =
|
||||
(struct octeon_cn6xxx *)oct->chip;
|
||||
spin_lock_irqsave
|
||||
(&cn6xxx->lock_for_droq_int_enb_reg, flags);
|
||||
value =
|
||||
octeon_read_csr(oct,
|
||||
CN6XXX_SLI_PKT_TIME_INT_ENB);
|
||||
value |= (1 << q_no);
|
||||
octeon_write_csr(oct,
|
||||
CN6XXX_SLI_PKT_TIME_INT_ENB,
|
||||
value);
|
||||
value =
|
||||
octeon_read_csr(oct,
|
||||
CN6XXX_SLI_PKT_CNT_INT_ENB);
|
||||
value |= (1 << q_no);
|
||||
octeon_write_csr(oct,
|
||||
CN6XXX_SLI_PKT_CNT_INT_ENB,
|
||||
value);
|
||||
|
||||
/* don't bother flushing the enables */
|
||||
|
||||
spin_unlock_irqrestore
|
||||
(&cn6xxx->lock_for_droq_int_enb_reg, flags);
|
||||
return 0;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
dev_err(&oct->pci_dev->dev, "%s Unknown command: %d\n", __func__, cmd);
|
||||
return -EINVAL;
|
||||
}
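For context, a NAPI poll routine in the NIC module could drive this entry point roughly as sketched below (the container_of lookup and the interrupt re-enable placement are illustrative, not necessarily the driver's actual poll function):

static int liquidio_napi_poll_sketch(struct napi_struct *napi, int budget)
{
	struct octeon_droq *droq = container_of(napi, struct octeon_droq, napi);
	struct octeon_device *oct = droq->oct_dev;
	int work_done;

	/* Process up to 'budget' packets from this output queue. */
	work_done = octeon_process_droq_poll_cmd(oct, droq->q_no,
						 POLL_EVENT_PROCESS_PKTS,
						 budget);

	if (work_done < budget) {
		/* All pending work done; stop polling and re-enable the
		 * per-queue packet interrupts.
		 */
		napi_complete(napi);
		octeon_process_droq_poll_cmd(oct, droq->q_no,
					     POLL_EVENT_ENABLE_INTR, 0);
	}

	return work_done;
}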
|
||||
|
||||
int octeon_register_droq_ops(struct octeon_device *oct, u32 q_no,
|
||||
struct octeon_droq_ops *ops)
|
||||
{
|
||||
struct octeon_droq *droq;
|
||||
unsigned long flags;
|
||||
struct octeon_config *oct_cfg = NULL;
|
||||
|
||||
oct_cfg = octeon_get_conf(oct);
|
||||
|
||||
if (!oct_cfg)
|
||||
return -EINVAL;
|
||||
|
||||
if (!(ops)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: droq_ops pointer is NULL\n",
|
||||
__func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (q_no >= CFG_GET_OQ_MAX_Q(oct_cfg)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: droq id (%d) exceeds MAX (%d)\n",
|
||||
__func__, q_no, (oct->num_oqs - 1));
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
droq = oct->droq[q_no];
|
||||
|
||||
spin_lock_irqsave(&droq->lock, flags);
|
||||
|
||||
memcpy(&droq->ops, ops, sizeof(struct octeon_droq_ops));
|
||||
|
||||
spin_unlock_irqrestore(&droq->lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_unregister_droq_ops(struct octeon_device *oct, u32 q_no)
|
||||
{
|
||||
unsigned long flags;
|
||||
struct octeon_droq *droq;
|
||||
struct octeon_config *oct_cfg = NULL;
|
||||
|
||||
oct_cfg = octeon_get_conf(oct);
|
||||
|
||||
if (!oct_cfg)
|
||||
return -EINVAL;
|
||||
|
||||
if (q_no >= CFG_GET_OQ_MAX_Q(oct_cfg)) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: droq id (%d) exceeds MAX (%d)\n",
|
||||
__func__, q_no, oct->num_oqs - 1);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
droq = oct->droq[q_no];
|
||||
|
||||
if (!droq) {
|
||||
dev_info(&oct->pci_dev->dev,
|
||||
"Droq id (%d) not available.\n", q_no);
|
||||
return 0;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&droq->lock, flags);
|
||||
|
||||
droq->ops.fptr = NULL;
|
||||
droq->ops.drop_on_max = 0;
|
||||
|
||||
spin_unlock_irqrestore(&droq->lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_create_droq(struct octeon_device *oct,
|
||||
u32 q_no, u32 num_descs,
|
||||
u32 desc_size, void *app_ctx)
|
||||
{
|
||||
struct octeon_droq *droq;
|
||||
|
||||
if (oct->droq[q_no]) {
|
||||
dev_dbg(&oct->pci_dev->dev, "Droq already in use. Cannot create droq %d again\n",
|
||||
q_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Allocate the DS for the new droq. */
|
||||
droq = vmalloc(sizeof(*droq));
|
||||
if (!droq)
|
||||
goto create_droq_fail;
|
||||
memset(droq, 0, sizeof(struct octeon_droq));
|
||||
|
||||
/*Disable the pkt o/p for this Q */
|
||||
octeon_set_droq_pkt_op(oct, q_no, 0);
|
||||
oct->droq[q_no] = droq;
|
||||
|
||||
/* Initialize the Droq */
|
||||
octeon_init_droq(oct, q_no, num_descs, desc_size, app_ctx);
|
||||
|
||||
oct->num_oqs++;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "%s: Total number of OQ: %d\n", __func__,
|
||||
oct->num_oqs);
|
||||
|
||||
/* Global Droq register settings */
|
||||
|
||||
/* As of now not required, as settings are done for all 32 Droqs at
|
||||
* the same time.
|
||||
*/
|
||||
return 0;
|
||||
|
||||
create_droq_fail:
|
||||
octeon_delete_droq(oct, q_no);
|
||||
return -1;
|
||||
}
|
426
drivers/net/ethernet/cavium/liquidio/octeon_droq.h
Normal file
|
@ -0,0 +1,426 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_droq.h
|
||||
* \brief Implementation of Octeon Output queues. "Output" is with
|
||||
* respect to the Octeon device on the NIC. From this driver's point of
|
||||
* view they are ingress queues.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_DROQ_H__
|
||||
#define __OCTEON_DROQ_H__
|
||||
|
||||
/* Default number of packets that will be processed in one iteration. */
|
||||
#define MAX_PACKET_BUDGET 0xFFFFFFFF
|
||||
|
||||
/** Octeon descriptor format.
|
||||
* The descriptor ring is made of descriptors which have 2 64-bit values:
|
||||
* -# Physical (bus) address of the data buffer.
|
||||
* -# Physical (bus) address of an octeon_droq_info structure.
|
||||
* The Octeon device DMA's incoming packets and its information at the address
|
||||
* given by these descriptor fields.
|
||||
*/
|
||||
struct octeon_droq_desc {
|
||||
/** The buffer pointer */
|
||||
u64 buffer_ptr;
|
||||
|
||||
/** The Info pointer */
|
||||
u64 info_ptr;
|
||||
};
|
||||
|
||||
#define OCT_DROQ_DESC_SIZE (sizeof(struct octeon_droq_desc))
|
||||
|
||||
/** Information about packet DMA'ed by Octeon.
|
||||
* The format of the information available at Info Pointer after Octeon
|
||||
* has posted a packet. Not all descriptors have valid information. Only
|
||||
* the Info field of the first descriptor for a packet has information
|
||||
* about the packet.
|
||||
*/
|
||||
struct octeon_droq_info {
|
||||
/** The Output Receive Header. */
|
||||
union octeon_rh rh;
|
||||
|
||||
/** The Length of the packet. */
|
||||
u64 length;
|
||||
};
|
||||
|
||||
#define OCT_DROQ_INFO_SIZE (sizeof(struct octeon_droq_info))
|
||||
|
||||
/** Pointer to data buffer.
|
||||
* Driver keeps a pointer to the data buffer that it made available to
|
||||
* the Octeon device. Since the descriptor ring keeps physical (bus)
|
||||
* addresses, this field is required for the driver to keep track of
|
||||
* the virtual address pointers.
|
||||
*/
|
||||
struct octeon_recv_buffer {
|
||||
/** Packet buffer, including metadata. */
|
||||
void *buffer;
|
||||
|
||||
/** Data in the packet buffer. */
|
||||
u8 *data;
|
||||
};
|
||||
|
||||
#define OCT_DROQ_RECVBUF_SIZE (sizeof(struct octeon_recv_buffer))
|
||||
|
||||
/** Output Queue statistics. Each output queue keeps one instance of these stats. */
|
||||
struct oct_droq_stats {
|
||||
/** Number of packets received in this queue. */
|
||||
u64 pkts_received;
|
||||
|
||||
/** Bytes received by this queue. */
|
||||
u64 bytes_received;
|
||||
|
||||
/** Packets dropped due to no dispatch function. */
|
||||
u64 dropped_nodispatch;
|
||||
|
||||
/** Packets dropped due to no memory available. */
|
||||
u64 dropped_nomem;
|
||||
|
||||
/** Packets dropped due to large number of pkts to process. */
|
||||
u64 dropped_toomany;
|
||||
|
||||
/** Number of packets sent to stack from this queue. */
|
||||
u64 rx_pkts_received;
|
||||
|
||||
/** Number of Bytes sent to stack from this queue. */
|
||||
u64 rx_bytes_received;
|
||||
|
||||
/** Num of Packets dropped due to receive path failures. */
|
||||
u64 rx_dropped;
|
||||
};
|
||||
|
||||
#define POLL_EVENT_INTR_ARRIVED 1
|
||||
#define POLL_EVENT_PROCESS_PKTS 2
|
||||
#define POLL_EVENT_PENDING_PKTS 3
|
||||
#define POLL_EVENT_ENABLE_INTR 4
|
||||
|
||||
/* The maximum number of buffers that can be dispatched from the
|
||||
* output/dma queue. Set to 64 assuming 1K buffers in DROQ and the fact that
|
||||
* max packet size from DROQ is 64K.
|
||||
*/
|
||||
#define MAX_RECV_BUFS 64
|
||||
|
||||
/** Receive Packet format used when dispatching output queue packets
|
||||
* with non-raw opcodes.
|
||||
* The received packet will be sent to the upper layers using this
|
||||
* structure which is passed as a parameter to the dispatch function
|
||||
*/
|
||||
struct octeon_recv_pkt {
|
||||
/** Number of buffers in this received packet */
|
||||
u16 buffer_count;
|
||||
|
||||
/** Id of the device that is sending the packet up */
|
||||
u16 octeon_id;
|
||||
|
||||
/** Length of data in the packet buffer */
|
||||
u32 length;
|
||||
|
||||
/** The receive header */
|
||||
union octeon_rh rh;
|
||||
|
||||
/** Pointer to the OS-specific packet buffer */
|
||||
void *buffer_ptr[MAX_RECV_BUFS];
|
||||
|
||||
/** Size of the buffers pointed to by ptr's in buffer_ptr */
|
||||
u32 buffer_size[MAX_RECV_BUFS];
|
||||
};
|
||||
|
||||
#define OCT_RECV_PKT_SIZE (sizeof(struct octeon_recv_pkt))
|
||||
|
||||
/** The first parameter of a dispatch function.
|
||||
* For a raw mode opcode, the driver dispatches with the device
|
||||
* pointer in this structure.
|
||||
* For non-raw mode opcode, the driver dispatches the recv_pkt
|
||||
* created to contain the buffers with data received from Octeon.
|
||||
* ---------------------
|
||||
* | *recv_pkt ----|---
|
||||
* |-------------------| |
|
||||
* | 0 or more bytes | |
|
||||
* | reserved by driver| |
|
||||
* |-------------------|<-/
|
||||
* | octeon_recv_pkt |
|
||||
* | |
|
||||
* |___________________|
|
||||
*/
|
||||
struct octeon_recv_info {
|
||||
void *rsvd;
|
||||
struct octeon_recv_pkt *recv_pkt;
|
||||
};
|
||||
|
||||
#define OCT_RECV_INFO_SIZE (sizeof(struct octeon_recv_info))
|
||||
|
||||
/** Allocate a recv_info structure. The recv_pkt pointer in the recv_info
|
||||
* structure is filled in before this call returns.
|
||||
* @param extra_bytes - extra bytes to be allocated at the end of the recv info
|
||||
* structure.
|
||||
* @return - pointer to a newly allocated recv_info structure.
|
||||
*/
|
||||
static inline struct octeon_recv_info *octeon_alloc_recv_info(int extra_bytes)
|
||||
{
|
||||
struct octeon_recv_info *recv_info;
|
||||
u8 *buf;
|
||||
|
||||
buf = kmalloc(OCT_RECV_PKT_SIZE + OCT_RECV_INFO_SIZE +
|
||||
extra_bytes, GFP_ATOMIC);
|
||||
if (!buf)
|
||||
return NULL;
|
||||
|
||||
recv_info = (struct octeon_recv_info *)buf;
|
||||
recv_info->recv_pkt =
|
||||
(struct octeon_recv_pkt *)(buf + OCT_RECV_INFO_SIZE);
|
||||
recv_info->rsvd = NULL;
|
||||
if (extra_bytes)
|
||||
recv_info->rsvd = buf + OCT_RECV_INFO_SIZE + OCT_RECV_PKT_SIZE;
|
||||
|
||||
return recv_info;
|
||||
}
|
||||
|
||||
/** Free a recv_info structure.
|
||||
* @param recv_info - Pointer to receive_info to be freed
|
||||
*/
|
||||
static inline void octeon_free_recv_info(struct octeon_recv_info *recv_info)
|
||||
{
|
||||
kfree(recv_info);
|
||||
}
|
||||
|
||||
typedef int (*octeon_dispatch_fn_t)(struct octeon_recv_info *, void *);
|
||||
|
||||
/** Used by NIC module to register packet handler and to get device
|
||||
* information for each octeon device.
|
||||
*/
|
||||
struct octeon_droq_ops {
|
||||
/** This registered function will be called by the driver with
|
||||
* the octeon id, pointer to buffer from droq and length of
|
||||
* data in the buffer. The receive header gives the port
|
||||
* number to the caller. Function pointer is set by caller.
|
||||
*/
|
||||
void (*fptr)(u32, void *, u32, union octeon_rh *, void *);
|
||||
|
||||
/* This function will be called by the driver for all NAPI related
|
||||
* events. The first param is the octeon id. The second param is the
|
||||
* output queue number. The third is the NAPI event that occurred.
|
||||
*/
|
||||
void (*napi_fn)(void *);
|
||||
|
||||
u32 poll_mode;
|
||||
|
||||
/** Flag indicating if the DROQ handler should drop packets that
|
||||
* it cannot handle in one iteration. Set by caller.
|
||||
*/
|
||||
u32 drop_on_max;
|
||||
};
|
||||
|
||||
/** The Descriptor Ring Output Queue structure.
|
||||
* This structure has all the information required to implement a
|
||||
* Octeon DROQ.
|
||||
*/
|
||||
struct octeon_droq {
|
||||
/** A spinlock to protect access to this ring. */
|
||||
spinlock_t lock;
|
||||
|
||||
u32 q_no;
|
||||
|
||||
struct octeon_droq_ops ops;
|
||||
|
||||
struct octeon_device *oct_dev;
|
||||
|
||||
/** The 8B aligned descriptor ring starts at this address. */
|
||||
struct octeon_droq_desc *desc_ring;
|
||||
|
||||
/** Index in the ring where the driver should read the next packet */
|
||||
u32 read_idx;
|
||||
|
||||
/** Index in the ring where Octeon will write the next packet */
|
||||
u32 write_idx;
|
||||
|
||||
/** Index in the ring where the driver will refill the descriptor's
|
||||
* buffer
|
||||
*/
|
||||
u32 refill_idx;
|
||||
|
||||
/** Packets pending to be processed */
|
||||
atomic_t pkts_pending;
|
||||
|
||||
/** Number of descriptors in this ring. */
|
||||
u32 max_count;
|
||||
|
||||
/** The number of descriptors pending refill. */
|
||||
u32 refill_count;
|
||||
|
||||
u32 pkts_per_intr;
|
||||
u32 refill_threshold;
|
||||
|
||||
/** The max number of descriptors in DROQ without a buffer.
|
||||
* This field is used to keep track of empty space threshold. If the
|
||||
* refill_count reaches this value, the DROQ cannot accept a max-sized
|
||||
* (64K) packet.
|
||||
*/
|
||||
u32 max_empty_descs;
|
||||
|
||||
/** The 8B aligned info ptrs begin from this address. */
|
||||
struct octeon_droq_info *info_list;
|
||||
|
||||
/** The receive buffer list. This list has the virtual addresses of the
|
||||
* buffers.
|
||||
*/
|
||||
struct octeon_recv_buffer *recv_buf_list;
|
||||
|
||||
/** The size of each buffer pointed by the buffer pointer. */
|
||||
u32 buffer_size;
|
||||
|
||||
/** Pointer to the mapped packet credit register.
|
||||
* Host writes number of info/buffer ptrs available to this register
|
||||
*/
|
||||
void __iomem *pkts_credit_reg;
|
||||
|
||||
/** Pointer to the mapped packet sent register.
|
||||
* Octeon writes the number of packets DMA'ed to host memory
|
||||
* in this register.
|
||||
*/
|
||||
void __iomem *pkts_sent_reg;
|
||||
|
||||
struct list_head dispatch_list;
|
||||
|
||||
/** Statistics for this DROQ. */
|
||||
struct oct_droq_stats stats;
|
||||
|
||||
/** DMA mapped address of the DROQ descriptor ring. */
|
||||
size_t desc_ring_dma;
|
||||
|
||||
/** Info ptr list is allocated at this virtual address. */
|
||||
size_t info_base_addr;
|
||||
|
||||
/** DMA mapped address of the info list */
|
||||
size_t info_list_dma;
|
||||
|
||||
/** Allocated size of info list. */
|
||||
u32 info_alloc_size;
|
||||
|
||||
/** application context */
|
||||
void *app_ctx;
|
||||
|
||||
struct napi_struct napi;
|
||||
|
||||
u32 cpu_id;
|
||||
|
||||
struct call_single_data csd;
|
||||
};
|
||||
|
||||
#define OCT_DROQ_SIZE (sizeof(struct octeon_droq))
|
||||
|
||||
/**
|
||||
* Allocates space for the descriptor ring for the droq and sets the
|
||||
* base addr, num desc etc in Octeon registers.
|
||||
*
|
||||
* @param oct_dev - pointer to the octeon device structure
|
||||
* @param q_no - droq no. ranges from 0 - 3.
|
||||
* @param app_ctx - pointer to application context
|
||||
* @return Success: 0 Failure: 1
|
||||
*/
|
||||
int octeon_init_droq(struct octeon_device *oct_dev,
|
||||
u32 q_no,
|
||||
u32 num_descs,
|
||||
u32 desc_size,
|
||||
void *app_ctx);
|
||||
|
||||
/**
|
||||
* Frees the space for descriptor ring for the droq.
|
||||
*
|
||||
* @param oct_dev - pointer to the octeon device structure
|
||||
* @param q_no - droq no. ranges from 0 - 3.
|
||||
* @return: Success: 0 Failure: 1
|
||||
*/
|
||||
int octeon_delete_droq(struct octeon_device *oct_dev, u32 q_no);
|
||||
|
||||
/** Register a change in droq operations. The ops field has a pointer to a
|
||||
* function which will be called by the DROQ handler for all packets arriving
|
||||
* on output queues given by q_no irrespective of the type of packet.
|
||||
* The ops field also has a flag which if set tells the DROQ handler to
|
||||
* drop packets if it receives more than what it can process in one
|
||||
* invocation of the handler.
|
||||
* @param oct - octeon device
|
||||
* @param q_no - octeon output queue number (0 <= q_no <= MAX_OCTEON_DROQ-1)
|
||||
* @param ops - the droq_ops settings for this queue
|
||||
* @return - 0 on success, -ENODEV or -EINVAL on error.
|
||||
*/
|
||||
int
|
||||
octeon_register_droq_ops(struct octeon_device *oct,
|
||||
u32 q_no,
|
||||
struct octeon_droq_ops *ops);
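To illustrate how a NIC module is expected to hook into a queue, here is a hedged sketch (the callback body, names and queue number are placeholders) that registers receive ops for output queue 0:

/* Hypothetical receive callback; signature matches octeon_droq_ops.fptr. */
static void my_rx_handler(u32 octeon_id, void *buffer, u32 len,
			  union octeon_rh *rh, void *napi)
{
	/* Hand the packet to the stack, read the port number from rh, etc. */
}

static int my_setup_rx(struct octeon_device *oct)
{
	struct octeon_droq_ops droq_ops;

	memset(&droq_ops, 0, sizeof(droq_ops));
	droq_ops.fptr = my_rx_handler;
	droq_ops.drop_on_max = 1;	/* drop what can't be handled in one pass */

	return octeon_register_droq_ops(oct, 0, &droq_ops);
}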
|
||||
|
||||
/** Resets the function pointer and flag settings made by
|
||||
* octeon_register_droq_ops(). After this routine is called, the DROQ handler
|
||||
* will lookup dispatch function for each arriving packet on the output queue
|
||||
* given by q_no.
|
||||
* @param oct - octeon device
|
||||
* @param q_no - octeon output queue number (0 <= q_no <= MAX_OCTEON_DROQ-1)
|
||||
* @return - 0 on success, -ENODEV or -EINVAL on error.
|
||||
*/
|
||||
int octeon_unregister_droq_ops(struct octeon_device *oct, u32 q_no);
|
||||
|
||||
/** Register a dispatch function for an opcode/subcode. The driver will call
|
||||
* this dispatch function when it receives a packet with the given
|
||||
* opcode/subcode in its output queues along with the user specified
|
||||
* argument.
|
||||
* @param oct - the octeon device to register with.
|
||||
* @param opcode - the opcode for which the dispatch will be registered.
|
||||
* @param subcode - the subcode for which the dispatch will be registered
|
||||
* @param fn - the dispatch function.
|
||||
* @param fn_arg - user specified argument that will be passed along with the
|
||||
* dispatch function by the driver.
|
||||
* @return Success: 0; Failure: 1
|
||||
*/
|
||||
int octeon_register_dispatch_fn(struct octeon_device *oct,
|
||||
u16 opcode,
|
||||
u16 subcode,
|
||||
octeon_dispatch_fn_t fn, void *fn_arg);
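A hedged usage sketch for the dispatch registration (MY_OPCODE, MY_SUBCODE and the handler are placeholders; the handler is typically responsible for freeing the recv_info):

/* Hypothetical handler; signature matches octeon_dispatch_fn_t. */
static int my_ctrl_pkt_handler(struct octeon_recv_info *recv_info, void *arg)
{
	struct octeon_device *oct = (struct octeon_device *)arg;

	/* ... inspect recv_info->recv_pkt, act on the message ... */
	octeon_free_recv_info(recv_info);
	return 0;
}

/* Somewhere in device init: */
octeon_register_dispatch_fn(oct, MY_OPCODE, MY_SUBCODE,
			    my_ctrl_pkt_handler, oct);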
|
||||
|
||||
/** Remove registration for an opcode/subcode. This will delete the mapping for
|
||||
* an opcode/subcode. The dispatch function will be unregistered and will no
|
||||
* longer be called if a packet with the opcode/subcode arrives in the driver
|
||||
* output queues.
|
||||
* @param oct - the octeon device to unregister from.
|
||||
* @param opcode - the opcode to be unregistered.
|
||||
* @param subcode - the subcode to be unregistered.
|
||||
*
|
||||
* @return Success: 0; Failure: 1
|
||||
*/
|
||||
int octeon_unregister_dispatch_fn(struct octeon_device *oct,
|
||||
u16 opcode,
|
||||
u16 subcode);
|
||||
|
||||
void octeon_droq_print_stats(void);
|
||||
|
||||
u32 octeon_droq_check_hw_for_pkts(struct octeon_device *oct,
|
||||
struct octeon_droq *droq);
|
||||
|
||||
int octeon_create_droq(struct octeon_device *oct, u32 q_no,
|
||||
u32 num_descs, u32 desc_size, void *app_ctx);
|
||||
|
||||
int octeon_droq_process_packets(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
u32 budget);
|
||||
|
||||
int octeon_process_droq_poll_cmd(struct octeon_device *oct, u32 q_no,
|
||||
int cmd, u32 arg);
|
||||
|
||||
#endif /*__OCTEON_DROQ_H__ */
|
319
drivers/net/ethernet/cavium/liquidio/octeon_iq.h
Normal file
|
@ -0,0 +1,319 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_iq.h
|
||||
* \brief Host Driver: Implementation of Octeon input queues. "Input" is
|
||||
* with respect to the Octeon device on the NIC. From this driver's
|
||||
* point of view they are egress queues.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_IQ_H__
|
||||
#define __OCTEON_IQ_H__
|
||||
|
||||
#define IQ_STATUS_RUNNING 1
|
||||
|
||||
#define IQ_SEND_OK 0
|
||||
#define IQ_SEND_STOP 1
|
||||
#define IQ_SEND_FAILED -1
|
||||
|
||||
/*------------------------- INSTRUCTION QUEUE --------------------------*/
|
||||
|
||||
/* \cond */
|
||||
|
||||
#define REQTYPE_NONE 0
|
||||
#define REQTYPE_NORESP_NET 1
|
||||
#define REQTYPE_NORESP_NET_SG 2
|
||||
#define REQTYPE_RESP_NET 3
|
||||
#define REQTYPE_RESP_NET_SG 4
|
||||
#define REQTYPE_SOFT_COMMAND 5
|
||||
#define REQTYPE_LAST 5
|
||||
|
||||
struct octeon_request_list {
|
||||
u32 reqtype;
|
||||
void *buf;
|
||||
};
|
||||
|
||||
/* \endcond */
|
||||
|
||||
/** Input Queue statistics. Each input queue keeps one instance of these stats. */
|
||||
struct oct_iq_stats {
|
||||
u64 instr_posted; /**< Instructions posted to this queue. */
|
||||
u64 instr_processed; /**< Instructions processed in this queue. */
|
||||
u64 instr_dropped; /**< Instructions that could not be processed */
|
||||
u64 bytes_sent; /**< Bytes sent through this queue. */
|
||||
u64 sgentry_sent;/**< Gather entries sent through this queue. */
|
||||
u64 tx_done;/**< Num of packets sent to network. */
|
||||
u64 tx_iq_busy;/**< Num of times this iq was found to be full. */
|
||||
u64 tx_dropped;/**< Num of pkts dropped due to xmit path errors. */
|
||||
u64 tx_tot_bytes;/**< Total count of bytes sent to network. */
|
||||
};
|
||||
|
||||
#define OCT_IQ_STATS_SIZE (sizeof(struct oct_iq_stats))
|
||||
|
||||
/** The instruction (input) queue.
|
||||
* The input queue is used to post raw (instruction) mode data or packet
|
||||
* data to the Octeon device from the host. Each input queue (up to 4) for
|
||||
* an Octeon device has one such structure to represent it.
|
||||
*/
|
||||
struct octeon_instr_queue {
|
||||
/** A spinlock to protect access to the input ring. */
|
||||
spinlock_t lock;
|
||||
|
||||
/** Flag that indicates if the queue uses 64 byte commands. */
|
||||
u32 iqcmd_64B:1;
|
||||
|
||||
/** Queue Number. */
|
||||
u32 iq_no:5;
|
||||
|
||||
u32 rsvd:17;
|
||||
|
||||
/* Controls the periodic flushing of iq */
|
||||
u32 do_auto_flush:1;
|
||||
|
||||
u32 status:8;
|
||||
|
||||
/** Maximum no. of instructions in this queue. */
|
||||
u32 max_count;
|
||||
|
||||
/** Index in input ring where the driver should write the next packet */
|
||||
u32 host_write_index;
|
||||
|
||||
/** Index in input ring where Octeon is expected to read the next
|
||||
* packet.
|
||||
*/
|
||||
u32 octeon_read_index;
|
||||
|
||||
/** This index aids in finding the window in the queue where Octeon
|
||||
* has read the commands.
|
||||
*/
|
||||
u32 flush_index;
|
||||
|
||||
/** This field keeps track of the instructions pending in this queue. */
|
||||
atomic_t instr_pending;
|
||||
|
||||
u32 reset_instr_cnt;
|
||||
|
||||
/** Pointer to the Virtual Base addr of the input ring. */
|
||||
u8 *base_addr;
|
||||
|
||||
struct octeon_request_list *request_list;
|
||||
|
||||
/** Octeon doorbell register for the ring. */
|
||||
void __iomem *doorbell_reg;
|
||||
|
||||
/** Octeon instruction count register for this ring. */
|
||||
void __iomem *inst_cnt_reg;
|
||||
|
||||
/** Number of instructions pending to be posted to Octeon. */
|
||||
u32 fill_cnt;
|
||||
|
||||
/** The max. number of instructions that can be held pending by the
|
||||
* driver.
|
||||
*/
|
||||
u32 fill_threshold;
|
||||
|
||||
/** The last time that the doorbell was rung. */
|
||||
u64 last_db_time;
|
||||
|
||||
/** The doorbell timeout. If the doorbell was not rung for this time and
|
||||
* fill_cnt is non-zero, ring the doorbell again.
|
||||
*/
|
||||
u32 db_timeout;
|
||||
|
||||
/** Statistics for this input queue. */
|
||||
struct oct_iq_stats stats;
|
||||
|
||||
/** DMA mapped base address of the input descriptor ring. */
|
||||
u64 base_addr_dma;
|
||||
|
||||
/** Application context */
|
||||
void *app_ctx;
|
||||
};
|
||||
|
||||
/*---------------------- INSTRUCTION FORMAT ----------------------------*/
|
||||
|
||||
/** 32-byte instruction format.
|
||||
* Format of instruction for a 32-byte mode input queue.
|
||||
*/
|
||||
struct octeon_instr_32B {
|
||||
/** Pointer where the input data is available. */
|
||||
u64 dptr;
|
||||
|
||||
/** Instruction Header. */
|
||||
u64 ih;
|
||||
|
||||
/** Pointer where the response for a RAW mode packet will be written
|
||||
* by Octeon.
|
||||
*/
|
||||
u64 rptr;
|
||||
|
||||
/** Input Request Header. Additional info about the input. */
|
||||
u64 irh;
|
||||
|
||||
};
|
||||
|
||||
#define OCT_32B_INSTR_SIZE (sizeof(struct octeon_instr_32B))
|
||||
|
||||
/** 64-byte instruction format.
|
||||
* Format of instruction for a 64-byte mode input queue.
|
||||
*/
|
||||
struct octeon_instr_64B {
|
||||
/** Pointer where the input data is available. */
|
||||
u64 dptr;
|
||||
|
||||
/** Instruction Header. */
|
||||
u64 ih;
|
||||
|
||||
/** Input Request Header. */
|
||||
u64 irh;
|
||||
|
||||
/** opcode/subcode specific parameters */
|
||||
u64 ossp[2];
|
||||
|
||||
/** Return Data Parameters */
|
||||
u64 rdp;
|
||||
|
||||
/** Pointer where the response for a RAW mode packet will be written
|
||||
* by Octeon.
|
||||
*/
|
||||
u64 rptr;
|
||||
|
||||
u64 reserved;
|
||||
|
||||
};
|
||||
|
||||
#define OCT_64B_INSTR_SIZE (sizeof(struct octeon_instr_64B))
|
||||
|
||||
/** The size of each buffer in soft command buffer pool
|
||||
*/
|
||||
#define SOFT_COMMAND_BUFFER_SIZE 1024
|
||||
|
||||
struct octeon_soft_command {
|
||||
/** Soft command buffer info. */
|
||||
struct list_head node;
|
||||
u64 dma_addr;
|
||||
u32 size;
|
||||
|
||||
/** Command and return status */
|
||||
struct octeon_instr_64B cmd;
|
||||
#define COMPLETION_WORD_INIT 0xffffffffffffffffULL
|
||||
u64 *status_word;
|
||||
|
||||
/** Data buffer info */
|
||||
void *virtdptr;
|
||||
u64 dmadptr;
|
||||
u32 datasize;
|
||||
|
||||
/** Return buffer info */
|
||||
void *virtrptr;
|
||||
u64 dmarptr;
|
||||
u32 rdatasize;
|
||||
|
||||
/** Context buffer info */
|
||||
void *ctxptr;
|
||||
u32 ctxsize;
|
||||
|
||||
/** Time out and callback */
|
||||
size_t wait_time;
|
||||
size_t timeout;
|
||||
u32 iq_no;
|
||||
void (*callback)(struct octeon_device *, u32, void *);
|
||||
void *callback_arg;
|
||||
};
|
||||
|
||||
/** Maximum number of buffers to allocate into soft command buffer pool
|
||||
*/
|
||||
#define MAX_SOFT_COMMAND_BUFFERS 16
|
||||
|
||||
/** Head of a soft command buffer pool.
|
||||
*/
|
||||
struct octeon_sc_buffer_pool {
|
||||
/** List structure to add delete pending entries to */
|
||||
struct list_head head;
|
||||
|
||||
/** A lock for this response list */
|
||||
spinlock_t lock;
|
||||
|
||||
atomic_t alloc_buf_count;
|
||||
};
|
||||
|
||||
int octeon_setup_sc_buffer_pool(struct octeon_device *oct);
|
||||
int octeon_free_sc_buffer_pool(struct octeon_device *oct);
|
||||
struct octeon_soft_command *
|
||||
octeon_alloc_soft_command(struct octeon_device *oct,
|
||||
u32 datasize, u32 rdatasize,
|
||||
u32 ctxsize);
|
||||
void octeon_free_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc);
|
||||
|
||||
/**
|
||||
* octeon_init_instr_queue()
|
||||
* @param octeon_dev - pointer to the octeon device structure.
|
||||
* @param iq_no - queue to be initialized (0 <= q_no <= 3).
|
||||
*
|
||||
* Called at driver init time for each input queue. iq_conf has the
|
||||
* configuration parameters for the queue.
|
||||
*
|
||||
* @return Success: 0 Failure: 1
|
||||
*/
|
||||
int octeon_init_instr_queue(struct octeon_device *octeon_dev, u32 iq_no,
|
||||
u32 num_descs);
|
||||
|
||||
/**
|
||||
* octeon_delete_instr_queue()
|
||||
* @param octeon_dev - pointer to the octeon device structure.
|
||||
* @param iq_no - queue to be deleted (0 <= q_no <= 3).
|
||||
*
|
||||
* Called at driver unload time for each input queue. Deletes all
|
||||
* allocated resources for the input queue.
|
||||
*
|
||||
* @return Success: 0 Failure: 1
|
||||
*/
|
||||
int octeon_delete_instr_queue(struct octeon_device *octeon_dev, u32 iq_no);
|
||||
|
||||
int lio_wait_for_instr_fetch(struct octeon_device *oct);
|
||||
|
||||
int
|
||||
octeon_register_reqtype_free_fn(struct octeon_device *oct, int reqtype,
|
||||
void (*fn)(void *));
|
||||
|
||||
int
|
||||
lio_process_iq_request_list(struct octeon_device *oct,
|
||||
struct octeon_instr_queue *iq);
|
||||
|
||||
int octeon_send_command(struct octeon_device *oct, u32 iq_no,
|
||||
u32 force_db, void *cmd, void *buf,
|
||||
u32 datasize, u32 reqtype);
|
||||
|
||||
void octeon_prepare_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc,
|
||||
u8 opcode, u8 subcode,
|
||||
u32 irh_ossp, u64 ossp0,
|
||||
u64 ossp1);
|
||||
|
||||
int octeon_send_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc);
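Taken together, the soft-command API above is typically used in an alloc/prepare/send sequence; a hedged sketch (opcode, subcode and buffer sizes are placeholders) follows:

/* Hypothetical command issue path using the declarations above. */
static int my_send_cmd_sketch(struct octeon_device *oct)
{
	struct octeon_soft_command *sc;
	int ret;

	/* No data buffer, a 16-byte response buffer, no context buffer. */
	sc = octeon_alloc_soft_command(oct, 0, 16, 0);
	if (!sc)
		return -ENOMEM;

	octeon_prepare_soft_command(oct, sc, MY_OPCODE, MY_SUBCODE, 0, 0, 0);

	ret = octeon_send_soft_command(oct, sc);
	if (ret == IQ_SEND_FAILED)
		octeon_free_soft_command(oct, sc);

	return ret;
}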
|
||||
|
||||
int octeon_setup_iq(struct octeon_device *oct, u32 iq_no,
|
||||
u32 num_descs, void *app_ctx);
|
||||
|
||||
#endif /* __OCTEON_IQ_H__ */
|
237
drivers/net/ethernet/cavium/liquidio/octeon_main.h
Normal file
|
@ -0,0 +1,237 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_main.h
|
||||
* \brief Host Driver: This file is included by all host driver source files
|
||||
* to include common definitions.
|
||||
*/
|
||||
|
||||
#ifndef _OCTEON_MAIN_H_
|
||||
#define _OCTEON_MAIN_H_
|
||||
|
||||
#if BITS_PER_LONG == 32
|
||||
#define CVM_CAST64(v) ((long long)(v))
|
||||
#elif BITS_PER_LONG == 64
|
||||
#define CVM_CAST64(v) ((long long)(long)(v))
|
||||
#else
|
||||
#error "Unknown system architecture"
|
||||
#endif
|
||||
|
||||
#define DRV_NAME "LiquidIO"
|
||||
|
||||
/**
|
||||
* \brief determines if a given console has debug enabled.
|
||||
* @param console console to check
|
||||
* @returns 1 = enabled. 0 otherwise
|
||||
*/
|
||||
int octeon_console_debug_enabled(u32 console);
|
||||
|
||||
/* BQL-related functions */
|
||||
void octeon_report_sent_bytes_to_bql(void *buf, int reqtype);
|
||||
void octeon_update_tx_completion_counters(void *buf, int reqtype,
|
||||
unsigned int *pkts_compl,
|
||||
unsigned int *bytes_compl);
|
||||
void octeon_report_tx_completion_to_bql(void *txq, unsigned int pkts_compl,
|
||||
unsigned int bytes_compl);
|
||||
|
||||
/** Swap 8B blocks */
|
||||
static inline void octeon_swap_8B_data(u64 *data, u32 blocks)
|
||||
{
|
||||
while (blocks) {
|
||||
cpu_to_be64s(data);
|
||||
blocks--;
|
||||
data++;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* \brief unmaps a PCI BAR
|
||||
* @param oct Pointer to Octeon device
|
||||
* @param baridx bar index
|
||||
*/
|
||||
static inline void octeon_unmap_pci_barx(struct octeon_device *oct, int baridx)
|
||||
{
|
||||
dev_dbg(&oct->pci_dev->dev, "Freeing PCI mapped regions for Bar%d\n",
|
||||
baridx);
|
||||
|
||||
if (oct->mmio[baridx].done)
|
||||
iounmap(oct->mmio[baridx].hw_addr);
|
||||
|
||||
if (oct->mmio[baridx].start)
|
||||
pci_release_region(oct->pci_dev, baridx * 2);
|
||||
}
|
||||
|
||||
/**
|
||||
* \brief maps a PCI BAR
|
||||
* @param oct Pointer to Octeon device
|
||||
* @param baridx bar index
|
||||
* @param max_map_len maximum length of mapped memory
|
||||
*/
|
||||
static inline int octeon_map_pci_barx(struct octeon_device *oct,
|
||||
int baridx, int max_map_len)
|
||||
{
|
||||
u32 mapped_len = 0;
|
||||
|
||||
if (pci_request_region(oct->pci_dev, baridx * 2, DRV_NAME)) {
|
||||
dev_err(&oct->pci_dev->dev, "pci_request_region failed for bar %d\n",
|
||||
baridx);
|
||||
return 1;
|
||||
}
|
||||
|
||||
oct->mmio[baridx].start = pci_resource_start(oct->pci_dev, baridx * 2);
|
||||
oct->mmio[baridx].len = pci_resource_len(oct->pci_dev, baridx * 2);
|
||||
|
||||
mapped_len = oct->mmio[baridx].len;
|
||||
if (!mapped_len)
|
||||
return 1;
|
||||
|
||||
if (max_map_len && (mapped_len > max_map_len))
|
||||
mapped_len = max_map_len;
|
||||
|
||||
oct->mmio[baridx].hw_addr =
|
||||
ioremap(oct->mmio[baridx].start, mapped_len);
|
||||
oct->mmio[baridx].mapped_len = mapped_len;
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "BAR%d start: 0x%llx mapped %u of %u bytes\n",
|
||||
baridx, oct->mmio[baridx].start, mapped_len,
|
||||
oct->mmio[baridx].len);
|
||||
|
||||
if (!oct->mmio[baridx].hw_addr) {
|
||||
dev_err(&oct->pci_dev->dev, "error ioremap for bar %d\n",
|
||||
baridx);
|
||||
return 1;
|
||||
}
|
||||
oct->mmio[baridx].done = 1;
|
||||
|
||||
return 0;
|
||||
}
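A hedged sketch of how a probe path might use these helpers to map BAR0 (the wrapper name is illustrative and error handling is simplified):

/* Map the full length of BAR0 (index 0); a non-zero return means failure. */
static int my_map_bars(struct octeon_device *oct)
{
	if (octeon_map_pci_barx(oct, 0, 0))
		return 1;

	/* On teardown or a later error path:
	 * octeon_unmap_pci_barx(oct, 0);
	 */
	return 0;
}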
|
||||
|
||||
static inline void *
|
||||
cnnic_alloc_aligned_dma(struct pci_dev *pci_dev,
|
||||
u32 size,
|
||||
u32 *alloc_size,
|
||||
size_t *orig_ptr,
|
||||
size_t *dma_addr __attribute__((unused)))
|
||||
{
|
||||
int retries = 0;
|
||||
void *ptr = NULL;
|
||||
|
||||
#define OCTEON_MAX_ALLOC_RETRIES 1
|
||||
do {
|
||||
ptr =
|
||||
(void *)__get_free_pages(GFP_KERNEL,
|
||||
get_order(size));
|
||||
if ((unsigned long)ptr & 0x07) {
|
||||
free_pages((unsigned long)ptr, get_order(size));
|
||||
ptr = NULL;
|
||||
/* Increment the size required if the first
|
||||
* attempt failed.
|
||||
*/
|
||||
if (!retries)
|
||||
size += 7;
|
||||
}
|
||||
retries++;
|
||||
} while ((retries <= OCTEON_MAX_ALLOC_RETRIES) && !ptr);
|
||||
|
||||
*alloc_size = size;
|
||||
*orig_ptr = (unsigned long)ptr;
|
||||
if ((unsigned long)ptr & 0x07)
|
||||
ptr = (void *)(((unsigned long)ptr + 7) & ~(7UL));
|
||||
return ptr;
|
||||
}
|
||||
|
||||
#define cnnic_free_aligned_dma(pci_dev, ptr, size, orig_ptr, dma_addr) \
|
||||
free_pages(orig_ptr, get_order(size))
|
||||
|
||||
static inline void
|
||||
sleep_cond(wait_queue_head_t *wait_queue, int *condition)
|
||||
{
|
||||
wait_queue_t we;
|
||||
|
||||
init_waitqueue_entry(&we, current);
|
||||
add_wait_queue(wait_queue, &we);
|
||||
while (!(ACCESS_ONCE(*condition))) {
|
||||
set_current_state(TASK_INTERRUPTIBLE);
|
||||
if (signal_pending(current))
|
||||
goto out;
|
||||
schedule();
|
||||
}
|
||||
out:
|
||||
set_current_state(TASK_RUNNING);
|
||||
remove_wait_queue(wait_queue, &we);
|
||||
}
|
||||
|
||||
static inline void
|
||||
sleep_atomic_cond(wait_queue_head_t *waitq, atomic_t *pcond)
|
||||
{
|
||||
wait_queue_t we;
|
||||
|
||||
init_waitqueue_entry(&we, current);
|
||||
add_wait_queue(waitq, &we);
|
||||
while (!atomic_read(pcond)) {
|
||||
set_current_state(TASK_INTERRUPTIBLE);
|
||||
if (signal_pending(current))
|
||||
goto out;
|
||||
schedule();
|
||||
}
|
||||
out:
|
||||
set_current_state(TASK_RUNNING);
|
||||
remove_wait_queue(waitq, &we);
|
||||
}
|
||||
|
||||
/* Gives up the CPU for a timeout period.
|
||||
* Check that the condition is not true before we go to sleep for a
|
||||
* timeout period.
|
||||
*/
|
||||
static inline void
|
||||
sleep_timeout_cond(wait_queue_head_t *wait_queue,
|
||||
int *condition,
|
||||
int timeout)
|
||||
{
|
||||
wait_queue_t we;
|
||||
|
||||
init_waitqueue_entry(&we, current);
|
||||
add_wait_queue(wait_queue, &we);
|
||||
set_current_state(TASK_INTERRUPTIBLE);
|
||||
if (!(*condition))
|
||||
schedule_timeout(timeout);
|
||||
set_current_state(TASK_RUNNING);
|
||||
remove_wait_queue(wait_queue, &we);
|
||||
}
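For completeness, a hedged sketch of the waker side that pairs with these helpers (the function, wait-queue and flag names are illustrative):

/* Hypothetical completion path, e.g. an interrupt handler or response
 * callback, that wakes a task blocked in sleep_cond()/sleep_timeout_cond().
 */
static void my_signal_completion(wait_queue_head_t *wq, int *condition)
{
	*condition = 1;		/* publish the condition the sleeper tests */
	wake_up(wq);		/* wake any task blocked on the wait queue */
}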
|
||||
|
||||
#ifndef ROUNDUP4
|
||||
#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
|
||||
#endif
|
||||
|
||||
#ifndef ROUNDUP8
|
||||
#define ROUNDUP8(val) (((val) + 7) & 0xfffffff8)
|
||||
#endif
|
||||
|
||||
#ifndef ROUNDUP16
|
||||
#define ROUNDUP16(val) (((val) + 15) & 0xfffffff0)
|
||||
#endif
|
||||
|
||||
#ifndef ROUNDUP128
|
||||
#define ROUNDUP128(val) (((val) + 127) & 0xffffff80)
|
||||
#endif
|
||||
|
||||
#endif /* _OCTEON_MAIN_H_ */
|
199
drivers/net/ethernet/cavium/liquidio/octeon_mem_ops.c
Normal file
|
@ -0,0 +1,199 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
#define MEMOPS_IDX MAX_BAR1_MAP_INDEX
|
||||
|
||||
static inline void
|
||||
octeon_toggle_bar1_swapmode(struct octeon_device *oct __attribute__((unused)),
|
||||
u32 idx __attribute__((unused)))
|
||||
{
|
||||
#ifdef __BIG_ENDIAN_BITFIELD
|
||||
u32 mask;
|
||||
|
||||
mask = oct->fn_list.bar1_idx_read(oct, idx);
|
||||
mask = (mask & 0x2) ? (mask & ~2) : (mask | 2);
|
||||
oct->fn_list.bar1_idx_write(oct, idx, mask);
|
||||
#endif
|
||||
}
|
||||
|
||||
static void
|
||||
octeon_pci_fastwrite(struct octeon_device *oct, u8 __iomem *mapped_addr,
|
||||
u8 *hostbuf, u32 len)
|
||||
{
|
||||
while ((len) && ((unsigned long)mapped_addr) & 7) {
|
||||
writeb(*(hostbuf++), mapped_addr++);
|
||||
len--;
|
||||
}
|
||||
|
||||
octeon_toggle_bar1_swapmode(oct, MEMOPS_IDX);
|
||||
|
||||
while (len >= 8) {
|
||||
writeq(*((u64 *)hostbuf), mapped_addr);
|
||||
mapped_addr += 8;
|
||||
hostbuf += 8;
|
||||
len -= 8;
|
||||
}
|
||||
|
||||
octeon_toggle_bar1_swapmode(oct, MEMOPS_IDX);
|
||||
|
||||
while (len--)
|
||||
writeb(*(hostbuf++), mapped_addr++);
|
||||
}
|
||||
|
||||
static void
|
||||
octeon_pci_fastread(struct octeon_device *oct, u8 __iomem *mapped_addr,
|
||||
u8 *hostbuf, u32 len)
|
||||
{
|
||||
while ((len) && ((unsigned long)mapped_addr) & 7) {
|
||||
*(hostbuf++) = readb(mapped_addr++);
|
||||
len--;
|
||||
}
|
||||
|
||||
octeon_toggle_bar1_swapmode(oct, MEMOPS_IDX);
|
||||
|
||||
while (len >= 8) {
|
||||
*((u64 *)hostbuf) = readq(mapped_addr);
|
||||
mapped_addr += 8;
|
||||
hostbuf += 8;
|
||||
len -= 8;
|
||||
}
|
||||
|
||||
octeon_toggle_bar1_swapmode(oct, MEMOPS_IDX);
|
||||
|
||||
while (len--)
|
||||
*(hostbuf++) = readb(mapped_addr++);
|
||||
}
|
||||
|
||||
/* Core mem read/write with temporary bar1 settings. */
|
||||
/* op = 1 to read, op = 0 to write. */
|
||||
static void
|
||||
__octeon_pci_rw_core_mem(struct octeon_device *oct, u64 addr,
|
||||
u8 *hostbuf, u32 len, u32 op)
|
||||
{
|
||||
u32 copy_len = 0, index_reg_val = 0;
|
||||
unsigned long flags;
|
||||
u8 __iomem *mapped_addr;
|
||||
|
||||
spin_lock_irqsave(&oct->mem_access_lock, flags);
|
||||
|
||||
/* Save the original index reg value. */
|
||||
index_reg_val = oct->fn_list.bar1_idx_read(oct, MEMOPS_IDX);
|
||||
do {
|
||||
oct->fn_list.bar1_idx_setup(oct, addr, MEMOPS_IDX, 1);
|
||||
mapped_addr = oct->mmio[1].hw_addr
|
||||
+ (MEMOPS_IDX << 22) + (addr & 0x3fffff);
|
||||
|
||||
/* If operation crosses a 4MB boundary, split the transfer
|
||||
* at the 4MB
|
||||
* boundary.
|
||||
*/
|
||||
if (((addr + len - 1) & ~(0x3fffff)) != (addr & ~(0x3fffff))) {
|
||||
copy_len = (u32)(((addr & ~(0x3fffff)) +
|
||||
(MEMOPS_IDX << 22)) - addr);
|
||||
} else {
|
||||
copy_len = len;
|
||||
}
|
||||
|
||||
if (op) { /* read from core */
|
||||
octeon_pci_fastread(oct, mapped_addr, hostbuf,
|
||||
copy_len);
|
||||
} else {
|
||||
octeon_pci_fastwrite(oct, mapped_addr, hostbuf,
|
||||
copy_len);
|
||||
}
|
||||
|
||||
len -= copy_len;
|
||||
addr += copy_len;
|
||||
hostbuf += copy_len;
|
||||
|
||||
} while (len);
|
||||
|
||||
oct->fn_list.bar1_idx_write(oct, MEMOPS_IDX, index_reg_val);
|
||||
|
||||
spin_unlock_irqrestore(&oct->mem_access_lock, flags);
|
||||
}
|
||||
|
||||
void
|
||||
octeon_pci_read_core_mem(struct octeon_device *oct,
|
||||
u64 coreaddr,
|
||||
u8 *buf,
|
||||
u32 len)
|
||||
{
|
||||
__octeon_pci_rw_core_mem(oct, coreaddr, buf, len, 1);
|
||||
}
|
||||
|
||||
void
|
||||
octeon_pci_write_core_mem(struct octeon_device *oct,
|
||||
u64 coreaddr,
|
||||
u8 *buf,
|
||||
u32 len)
|
||||
{
|
||||
__octeon_pci_rw_core_mem(oct, coreaddr, buf, len, 0);
|
||||
}
|
||||
|
||||
u64 octeon_read_device_mem64(struct octeon_device *oct, u64 coreaddr)
|
||||
{
|
||||
u64 ret;
|
||||
|
||||
__octeon_pci_rw_core_mem(oct, coreaddr, (u8 *)&ret, 8, 1);
|
||||
|
||||
return be64_to_cpu(ret);
|
||||
}
|
||||
|
||||
u32 octeon_read_device_mem32(struct octeon_device *oct, u64 coreaddr)
|
||||
{
|
||||
u32 ret;
|
||||
|
||||
__octeon_pci_rw_core_mem(oct, coreaddr, (u8 *)&ret, 4, 1);
|
||||
|
||||
return be32_to_cpu(ret);
|
||||
}
|
||||
|
||||
void octeon_write_device_mem32(struct octeon_device *oct, u64 coreaddr,
|
||||
u32 val)
|
||||
{
|
||||
u32 t = cpu_to_be32(val);
|
||||
|
||||
__octeon_pci_rw_core_mem(oct, coreaddr, (u8 *)&t, 4, 0);
|
||||
}
|
75
drivers/net/ethernet/cavium/liquidio/octeon_mem_ops.h
Normal file
|
@ -0,0 +1,75 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_mem_ops.h
|
||||
* \brief Host Driver: Routines used to read/write Octeon memory.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_MEM_OPS_H__
|
||||
#define __OCTEON_MEM_OPS_H__
|
||||
|
||||
/** Read a 64-bit value from a BAR1 mapped core memory address.
|
||||
* @param oct - pointer to the octeon device.
|
||||
* @param core_addr - the address to read from.
|
||||
*
|
||||
* A BAR1 index register is temporarily set up for the address range
|
||||
* in which core_addr is mapped.
|
||||
*
|
||||
* @return 64-bit value read from Core memory
|
||||
*/
|
||||
u64 octeon_read_device_mem64(struct octeon_device *oct, u64 core_addr);
|
||||
|
||||
/** Read a 32-bit value from a BAR1 mapped core memory address.
|
||||
* @param oct - pointer to the octeon device.
|
||||
* @param core_addr - the address to read from.
|
||||
*
|
||||
* @return 32-bit value read from Core memory
|
||||
*/
|
||||
u32 octeon_read_device_mem32(struct octeon_device *oct, u64 core_addr);
|
||||
|
||||
/** Write a 32-bit value to a BAR1 mapped core memory address.
|
||||
* @param oct - pointer to the octeon device.
|
||||
* @param core_addr - the address to write to.
|
||||
* @param val - 32-bit value to write.
|
||||
*/
|
||||
void
|
||||
octeon_write_device_mem32(struct octeon_device *oct,
|
||||
u64 core_addr,
|
||||
u32 val);
|
||||
|
||||
/** Read multiple bytes from Octeon memory.
|
||||
*/
|
||||
void
|
||||
octeon_pci_read_core_mem(struct octeon_device *oct,
|
||||
u64 coreaddr,
|
||||
u8 *buf,
|
||||
u32 len);
|
||||
|
||||
/** Write multiple bytes into Octeon memory.
|
||||
*/
|
||||
void
|
||||
octeon_pci_write_core_mem(struct octeon_device *oct,
|
||||
u64 coreaddr,
|
||||
u8 *buf,
|
||||
u32 len);
|
||||
|
||||
#endif
|
drivers/net/ethernet/cavium/liquidio/octeon_network.h (new file, 224 lines)
@@ -0,0 +1,224 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_network.h
|
||||
* \brief Host NIC Driver: Structure and Macro definitions used by NIC Module.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_NETWORK_H__
|
||||
#define __OCTEON_NETWORK_H__
|
||||
#include <linux/version.h>
|
||||
#include <linux/dma-mapping.h>
|
||||
#include <linux/ptp_clock_kernel.h>
|
||||
|
||||
/** LiquidIO per-interface network private data */
|
||||
struct lio {
|
||||
/** State of the interface. Rx/Tx happens only in the RUNNING state. */
|
||||
atomic_t ifstate;
|
||||
|
||||
/** Octeon Interface index number. This device will be represented as
|
||||
* oct<ifidx> in the system.
|
||||
*/
|
||||
int ifidx;
|
||||
|
||||
/** Octeon Input queue to use to transmit for this network interface. */
|
||||
int txq;
|
||||
|
||||
/** Octeon Output queue from which pkts arrive
|
||||
* for this network interface.
|
||||
*/
|
||||
int rxq;
|
||||
|
||||
/** Guards the glist */
|
||||
spinlock_t lock;
|
||||
|
||||
/** Linked list of gather components */
|
||||
struct list_head glist;
|
||||
|
||||
/** Pointer to the NIC properties for the Octeon device this network
|
||||
* interface is associated with.
|
||||
*/
|
||||
struct octdev_props *octprops;
|
||||
|
||||
/** Pointer to the octeon device structure. */
|
||||
struct octeon_device *oct_dev;
|
||||
|
||||
struct net_device *netdev;
|
||||
|
||||
/** Link information sent by the core application for this interface. */
|
||||
struct oct_link_info linfo;
|
||||
|
||||
/** Size of Tx queue for this octeon device. */
|
||||
u32 tx_qsize;
|
||||
|
||||
/** Size of Rx queue for this octeon device. */
|
||||
u32 rx_qsize;
|
||||
|
||||
/** MTU size of this octeon device. */
|
||||
u32 mtu;
|
||||
|
||||
/** msg level flag per interface. */
|
||||
u32 msg_enable;
|
||||
|
||||
/** Copy of Interface capabilities: TSO, TSO6, LRO, Checksums. */
|
||||
u64 dev_capability;
|
||||
|
||||
/** Copy of beacon reg in phy */
|
||||
u32 phy_beacon_val;
|
||||
|
||||
/** Copy of ctrl reg in phy */
|
||||
u32 led_ctrl_val;
|
||||
|
||||
/* PTP clock information */
|
||||
struct ptp_clock_info ptp_info;
|
||||
struct ptp_clock *ptp_clock;
|
||||
s64 ptp_adjust;
|
||||
|
||||
/* for atomic access to Octeon PTP reg and data struct */
|
||||
spinlock_t ptp_lock;
|
||||
|
||||
/* Interface info */
|
||||
u32 intf_open;
|
||||
|
||||
/* work queue for txq status */
|
||||
struct cavium_wq txq_status_wq;
|
||||
|
||||
};
|
||||
|
||||
#define LIO_SIZE (sizeof(struct lio))
|
||||
#define GET_LIO(netdev) ((struct lio *)netdev_priv(netdev))
|
||||
|
||||
/**
|
||||
* \brief Enable or disable feature
|
||||
* @param netdev pointer to network device
|
||||
* @param cmd Command that just requires acknowledgment
|
||||
*/
|
||||
int liquidio_set_feature(struct net_device *netdev, int cmd);
|
||||
|
||||
/**
|
||||
* \brief Link control command completion callback
|
||||
* @param nctrl_ptr pointer to control packet structure
|
||||
*
|
||||
* This routine is called by the callback function when a ctrl pkt sent to
|
||||
* core app completes. The nctrl_ptr contains a copy of the command type
|
||||
* and data sent to the core app. This routine is only called if the ctrl
|
||||
* pkt was sent successfully to the core app.
|
||||
*/
|
||||
void liquidio_link_ctrl_cmd_completion(void *nctrl_ptr);
|
||||
|
||||
/**
|
||||
* \brief Register ethtool operations
|
||||
* @param netdev pointer to network device
|
||||
*/
|
||||
void liquidio_set_ethtool_ops(struct net_device *netdev);
|
||||
|
||||
static inline void
|
||||
*recv_buffer_alloc(struct octeon_device *oct __attribute__((unused)),
|
||||
u32 q_no __attribute__((unused)), u32 size)
|
||||
{
|
||||
#define SKB_ADJ_MASK 0x3F
|
||||
#define SKB_ADJ (SKB_ADJ_MASK + 1)
|
||||
|
||||
struct sk_buff *skb = dev_alloc_skb(size + SKB_ADJ);

if (unlikely(!skb))
return NULL;
|
||||
|
||||
if ((unsigned long)skb->data & SKB_ADJ_MASK) {
|
||||
u32 r = SKB_ADJ - ((unsigned long)skb->data & SKB_ADJ_MASK);
|
||||
|
||||
skb_reserve(skb, r);
|
||||
}
|
||||
|
||||
return (void *)skb;
|
||||
}
|
||||
|
||||
static inline void recv_buffer_free(void *buffer)
|
||||
{
|
||||
dev_kfree_skb_any((struct sk_buff *)buffer);
|
||||
}
|
||||
|
||||
#define lio_dma_alloc(oct, size, dma_addr) \
|
||||
dma_alloc_coherent(&oct->pci_dev->dev, size, dma_addr, GFP_KERNEL)
|
||||
#define lio_dma_free(oct, size, virt_addr, dma_addr) \
|
||||
dma_free_coherent(&oct->pci_dev->dev, size, virt_addr, dma_addr)
|
||||
|
||||
#define get_rbd(ptr) (((struct sk_buff *)(ptr))->data)
|
||||
|
||||
static inline u64
|
||||
lio_map_ring_info(struct octeon_droq *droq, u32 i)
|
||||
{
|
||||
dma_addr_t dma_addr;
|
||||
struct octeon_device *oct = droq->oct_dev;
|
||||
|
||||
dma_addr = dma_map_single(&oct->pci_dev->dev, &droq->info_list[i],
|
||||
OCT_DROQ_INFO_SIZE, DMA_FROM_DEVICE);
|
||||
|
||||
BUG_ON(dma_mapping_error(&oct->pci_dev->dev, dma_addr));
|
||||
|
||||
return (u64)dma_addr;
|
||||
}
|
||||
|
||||
static inline void
|
||||
lio_unmap_ring_info(struct pci_dev *pci_dev,
|
||||
u64 info_ptr, u32 size)
|
||||
{
|
||||
dma_unmap_single(&pci_dev->dev, info_ptr, size, DMA_FROM_DEVICE);
|
||||
}
|
||||
|
||||
static inline u64
|
||||
lio_map_ring(struct pci_dev *pci_dev,
|
||||
void *buf, u32 size)
|
||||
{
|
||||
dma_addr_t dma_addr;
|
||||
|
||||
dma_addr = dma_map_single(&pci_dev->dev, get_rbd(buf), size,
|
||||
DMA_FROM_DEVICE);
|
||||
|
||||
BUG_ON(dma_mapping_error(&pci_dev->dev, dma_addr));
|
||||
|
||||
return (u64)dma_addr;
|
||||
}
|
||||
|
||||
static inline void
|
||||
lio_unmap_ring(struct pci_dev *pci_dev,
|
||||
u64 buf_ptr, u32 size)
|
||||
{
|
||||
dma_unmap_single(&pci_dev->dev,
|
||||
buf_ptr, size,
|
||||
DMA_FROM_DEVICE);
|
||||
}
|
||||
|
||||
static inline void *octeon_fast_packet_alloc(struct octeon_device *oct,
|
||||
struct octeon_droq *droq,
|
||||
u32 q_no, u32 size)
|
||||
{
|
||||
return recv_buffer_alloc(oct, q_no, size);
|
||||
}
|
||||
|
||||
static inline void octeon_fast_packet_next(struct octeon_droq *droq,
|
||||
struct sk_buff *nicbuf,
|
||||
int copy_len,
|
||||
int idx)
|
||||
{
|
||||
memcpy(skb_put(nicbuf, copy_len),
|
||||
get_rbd(droq->recv_buf_list[idx].buffer), copy_len);
|
||||
}
|
||||
|
||||
#endif
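
A short sketch of how the inline helpers above combine when refilling one receive buffer, assuming an already-registered netdev; the helper name, queue number, and buffer size are illustrative and not part of this patch:

/* Hypothetical refill helper for illustration only. */
static u64 lio_example_refill_one(struct net_device *netdev, u32 q_no)
{
	struct lio *lio = GET_LIO(netdev);
	struct octeon_device *oct = lio->oct_dev;
	void *buf;

	/* Allocate an skb whose data area meets the 64-byte alignment rule. */
	buf = recv_buffer_alloc(oct, q_no, 2048);
	if (!buf)
		return 0;

	/* DMA-map the skb data so the device can write a packet into it. */
	return lio_map_ring(oct->pci_dev, buf, 2048);
}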
|
drivers/net/ethernet/cavium/liquidio/octeon_nic.c (new file, 189 lines)
@@ -0,0 +1,189 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
#include "octeon_mem_ops.h"
|
||||
|
||||
void *
|
||||
octeon_alloc_soft_command_resp(struct octeon_device *oct,
|
||||
struct octeon_instr_64B *cmd,
|
||||
size_t rdatasize)
|
||||
{
|
||||
struct octeon_soft_command *sc;
|
||||
struct octeon_instr_ih *ih;
|
||||
struct octeon_instr_irh *irh;
|
||||
struct octeon_instr_rdp *rdp;
|
||||
|
||||
sc = (struct octeon_soft_command *)
|
||||
octeon_alloc_soft_command(oct, 0, rdatasize, 0);
|
||||
|
||||
if (!sc)
|
||||
return NULL;
|
||||
|
||||
/* Copy existing command structure into the soft command */
|
||||
memcpy(&sc->cmd, cmd, sizeof(struct octeon_instr_64B));
|
||||
|
||||
/* Add in the response related fields. Opcode and Param are already
|
||||
* there.
|
||||
*/
|
||||
ih = (struct octeon_instr_ih *)&sc->cmd.ih;
|
||||
ih->fsz = 40; /* irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
|
||||
|
||||
irh = (struct octeon_instr_irh *)&sc->cmd.irh;
|
||||
irh->rflag = 1; /* a response is required */
|
||||
irh->len = 4; /* means four 64-bit words immediately follow irh */
|
||||
|
||||
rdp = (struct octeon_instr_rdp *)&sc->cmd.rdp;
|
||||
rdp->pcie_port = oct->pcie_port;
|
||||
rdp->rlen = rdatasize;
|
||||
|
||||
*sc->status_word = COMPLETION_WORD_INIT;
|
||||
|
||||
sc->wait_time = 1000;
|
||||
sc->timeout = jiffies + sc->wait_time;
|
||||
|
||||
return sc;
|
||||
}
|
||||
|
||||
int octnet_send_nic_data_pkt(struct octeon_device *oct,
|
||||
struct octnic_data_pkt *ndata,
|
||||
u32 xmit_more)
|
||||
{
|
||||
int ring_doorbell;
|
||||
|
||||
ring_doorbell = !xmit_more;
|
||||
|
||||
return octeon_send_command(oct, ndata->q_no, ring_doorbell, &ndata->cmd,
|
||||
ndata->buf, ndata->datasize,
|
||||
ndata->reqtype);
|
||||
}
|
||||
|
||||
static void octnet_link_ctrl_callback(struct octeon_device *oct,
|
||||
u32 status,
|
||||
void *sc_ptr)
|
||||
{
|
||||
struct octeon_soft_command *sc = (struct octeon_soft_command *)sc_ptr;
|
||||
struct octnic_ctrl_pkt *nctrl;
|
||||
|
||||
nctrl = (struct octnic_ctrl_pkt *)sc->ctxptr;
|
||||
|
||||
/* Call the callback function if status is OK.
|
||||
* Status is OK only if a response was expected and core returned
|
||||
* success.
|
||||
* If no response was expected, status is OK if the command was posted
|
||||
* successfully.
|
||||
*/
|
||||
if (!status && nctrl->cb_fn)
|
||||
nctrl->cb_fn(nctrl);
|
||||
|
||||
octeon_free_soft_command(oct, sc);
|
||||
}
|
||||
|
||||
static inline struct octeon_soft_command
|
||||
*octnic_alloc_ctrl_pkt_sc(struct octeon_device *oct,
|
||||
struct octnic_ctrl_pkt *nctrl,
|
||||
struct octnic_ctrl_params nparams)
|
||||
{
|
||||
struct octeon_soft_command *sc = NULL;
|
||||
u8 *data;
|
||||
size_t rdatasize;
|
||||
u32 uddsize = 0, datasize = 0;
|
||||
|
||||
uddsize = (u32)(nctrl->ncmd.s.more * 8);
|
||||
|
||||
datasize = OCTNET_CMD_SIZE + uddsize;
|
||||
rdatasize = (nctrl->wait_time) ? 16 : 0;
|
||||
|
||||
sc = (struct octeon_soft_command *)
|
||||
octeon_alloc_soft_command(oct, datasize, rdatasize,
|
||||
sizeof(struct octnic_ctrl_pkt));
|
||||
|
||||
if (!sc)
|
||||
return NULL;
|
||||
|
||||
memcpy(sc->ctxptr, nctrl, sizeof(struct octnic_ctrl_pkt));
|
||||
|
||||
data = (u8 *)sc->virtdptr;
|
||||
|
||||
memcpy(data, &nctrl->ncmd, OCTNET_CMD_SIZE);
|
||||
|
||||
octeon_swap_8B_data((u64 *)data, (OCTNET_CMD_SIZE >> 3));
|
||||
|
||||
if (uddsize) {
|
||||
/* Endian-Swap for UDD should have been done by caller. */
|
||||
memcpy(data + OCTNET_CMD_SIZE, nctrl->udd, uddsize);
|
||||
}
|
||||
|
||||
octeon_prepare_soft_command(oct, sc, OPCODE_NIC, OPCODE_NIC_CMD,
|
||||
0, 0, 0);
|
||||
|
||||
sc->callback = octnet_link_ctrl_callback;
|
||||
sc->callback_arg = sc;
|
||||
sc->wait_time = nctrl->wait_time;
|
||||
|
||||
return sc;
|
||||
}
|
||||
|
||||
int
|
||||
octnet_send_nic_ctrl_pkt(struct octeon_device *oct,
|
||||
struct octnic_ctrl_pkt *nctrl,
|
||||
struct octnic_ctrl_params nparams)
|
||||
{
|
||||
int retval;
|
||||
struct octeon_soft_command *sc = NULL;
|
||||
|
||||
sc = octnic_alloc_ctrl_pkt_sc(oct, nctrl, nparams);
|
||||
if (!sc) {
|
||||
dev_err(&oct->pci_dev->dev, "%s soft command alloc failed\n",
|
||||
__func__);
|
||||
return -1;
|
||||
}
|
||||
|
||||
retval = octeon_send_soft_command(oct, sc);
|
||||
if (retval) {
|
||||
octeon_free_soft_command(oct, sc);
|
||||
dev_err(&oct->pci_dev->dev, "%s soft command send failed status: %x\n",
|
||||
__func__, retval);
|
||||
return -1;
|
||||
}
|
||||
|
||||
return retval;
|
||||
}
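
The control path above is easiest to see end to end with a small, hedged sketch. The command words inside union octnet_cmd are defined in liquidio_common.h and are not part of this hunk, so they are left zeroed here; the helper and callback names are hypothetical:

static void lio_example_ctrl_done(void *nctrl_ptr)
{
	struct octnic_ctrl_pkt *nctrl = nctrl_ptr;

	/* Called only if the command was posted (and, if expected, answered). */
	pr_info("liquidio example: ctrl command done (more=%d)\n",
		nctrl->ncmd.s.more);
}

static int lio_example_send_ctrl(struct octeon_device *oct)
{
	struct octnic_ctrl_pkt nctrl;
	struct octnic_ctrl_params nparams;

	memset(&nctrl, 0, sizeof(nctrl));

	/* nctrl.ncmd would be filled per liquidio_common.h; zeroed here. */
	nctrl.wait_time = 100;		/* a response is expected */
	nctrl.cb_fn = lio_example_ctrl_done;

	nparams.resp_order = OCTEON_RESP_ORDERED;

	return octnet_send_nic_ctrl_pkt(oct, &nctrl, nparams);
}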
|
drivers/net/ethernet/cavium/liquidio/octeon_nic.h (new file, 227 lines)
@@ -0,0 +1,227 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file octeon_nic.h
|
||||
* \brief Host NIC Driver: Routine to send network data &
|
||||
* control packet to Octeon.
|
||||
*/
|
||||
|
||||
#ifndef __OCTEON_NIC_H__
|
||||
#define __OCTEON_NIC_H__
|
||||
|
||||
/* Maximum number of 8-byte words that can be sent in a NIC control message.
|
||||
*/
|
||||
#define MAX_NCTRL_UDD 32
|
||||
|
||||
typedef void (*octnic_ctrl_pkt_cb_fn_t) (void *);
|
||||
|
||||
/* Structure of control information passed by the NIC module to the OSI
|
||||
* layer when sending control commands to Octeon device software.
|
||||
*/
|
||||
struct octnic_ctrl_pkt {
|
||||
/** Command to be passed to the Octeon device software. */
|
||||
union octnet_cmd ncmd;
|
||||
|
||||
/** Send buffer */
|
||||
void *data;
|
||||
u64 dmadata;
|
||||
|
||||
/** Response buffer */
|
||||
void *rdata;
|
||||
u64 dmardata;
|
||||
|
||||
/** Additional data that may be needed by some commands. */
|
||||
u64 udd[MAX_NCTRL_UDD];
|
||||
|
||||
/** Time to wait for Octeon software to respond to this control command.
|
||||
* If wait_time is 0, OSI assumes no response is expected.
|
||||
*/
|
||||
size_t wait_time;
|
||||
|
||||
/** The network device that issued the control command. */
|
||||
u64 netpndev;
|
||||
|
||||
/** Callback function called when the command has been fetched */
|
||||
octnic_ctrl_pkt_cb_fn_t cb_fn;
|
||||
};
|
||||
|
||||
#define MAX_UDD_SIZE(nctrl) (sizeof(nctrl->udd))
|
||||
|
||||
/** Structure of data information passed by the NIC module to the OSI
|
||||
* layer when forwarding data to Octeon device software.
|
||||
*/
|
||||
struct octnic_data_pkt {
|
||||
/** Pointer to information maintained by NIC module for this packet. The
|
||||
* OSI layer passes this as-is to the driver.
|
||||
*/
|
||||
void *buf;
|
||||
|
||||
/** Type of buffer passed in "buf" above. */
|
||||
u32 reqtype;
|
||||
|
||||
/** Total data bytes to be transferred in this command. */
|
||||
u32 datasize;
|
||||
|
||||
/** Command to be passed to the Octeon device software. */
|
||||
struct octeon_instr_64B cmd;
|
||||
|
||||
/** Input queue to use to send this command. */
|
||||
u32 q_no;
|
||||
|
||||
};
|
||||
|
||||
/** Structure passed by NIC module to OSI layer to prepare a command to send
|
||||
* network data to Octeon.
|
||||
*/
|
||||
union octnic_cmd_setup {
|
||||
struct {
|
||||
u32 ifidx:8;
|
||||
u32 cksum_offset:7;
|
||||
u32 gather:1;
|
||||
u32 timestamp:1;
|
||||
u32 ipv4opts_ipv6exthdr:2;
|
||||
u32 ip_csum:1;
|
||||
u32 tnl_csum:1;
|
||||
|
||||
u32 rsvd:11;
|
||||
union {
|
||||
u32 datasize;
|
||||
u32 gatherptrs;
|
||||
} u;
|
||||
} s;
|
||||
|
||||
u64 u64;
|
||||
|
||||
};
|
||||
|
||||
struct octnic_ctrl_params {
|
||||
u32 resp_order;
|
||||
};
|
||||
|
||||
static inline int octnet_iq_is_full(struct octeon_device *oct, u32 q_no)
|
||||
{
|
||||
return ((u32)atomic_read(&oct->instr_queue[q_no]->instr_pending)
|
||||
>= (oct->instr_queue[q_no]->max_count - 2));
|
||||
}
|
||||
|
||||
/** Utility function to prepare a 64B NIC instruction based on a setup command
|
||||
* @param cmd - pointer to instruction to be filled in.
|
||||
* @param setup - pointer to the setup structure
|
||||
* @param q_no - which queue for back pressure
|
||||
*
|
||||
* Assumes the cmd instruction is pre-allocated, but no fields are filled in.
|
||||
*/
|
||||
static inline void
|
||||
octnet_prepare_pci_cmd(struct octeon_instr_64B *cmd,
|
||||
union octnic_cmd_setup *setup, u32 tag)
|
||||
{
|
||||
struct octeon_instr_ih *ih;
|
||||
struct octeon_instr_irh *irh;
|
||||
union octnic_packet_params packet_params;
|
||||
|
||||
memset(cmd, 0, sizeof(struct octeon_instr_64B));
|
||||
|
||||
ih = (struct octeon_instr_ih *)&cmd->ih;
|
||||
|
||||
/* assume that rflag is cleared, so the front data will only have
|
||||
* irh and ossp[0] and ossp[1] for a total of 24 bytes
|
||||
*/
|
||||
ih->fsz = 24;
|
||||
|
||||
ih->tagtype = ORDERED_TAG;
|
||||
ih->grp = DEFAULT_POW_GRP;
|
||||
|
||||
if (tag)
|
||||
ih->tag = tag;
|
||||
else
|
||||
ih->tag = LIO_DATA(setup->s.ifidx);
|
||||
|
||||
ih->raw = 1;
|
||||
ih->qos = (setup->s.ifidx & 3) + 4; /* map qos based on interface */
|
||||
|
||||
if (!setup->s.gather) {
|
||||
ih->dlengsz = setup->s.u.datasize;
|
||||
} else {
|
||||
ih->gather = 1;
|
||||
ih->dlengsz = setup->s.u.gatherptrs;
|
||||
}
|
||||
|
||||
irh = (struct octeon_instr_irh *)&cmd->irh;
|
||||
|
||||
irh->opcode = OPCODE_NIC;
|
||||
irh->subcode = OPCODE_NIC_NW_DATA;
|
||||
|
||||
packet_params.u32 = 0;
|
||||
|
||||
if (setup->s.cksum_offset) {
|
||||
packet_params.s.csoffset = setup->s.cksum_offset;
|
||||
packet_params.s.ipv4opts_ipv6exthdr =
|
||||
setup->s.ipv4opts_ipv6exthdr;
|
||||
}
|
||||
|
||||
packet_params.s.ip_csum = setup->s.ip_csum;
|
||||
packet_params.s.tnl_csum = setup->s.tnl_csum;
|
||||
packet_params.s.ifidx = setup->s.ifidx;
|
||||
packet_params.s.tsflag = setup->s.timestamp;
|
||||
|
||||
irh->ossp = packet_params.u32;
|
||||
}
|
||||
|
||||
/** Allocate a soft command with space for a response immediately following
|
||||
* the command.
|
||||
* @param oct - octeon device pointer
|
||||
* @param cmd - pointer to the command structure, pre-filled for everything
|
||||
* except the response.
|
||||
* @param rdatasize - size in bytes of the response.
|
||||
*
|
||||
* @returns pointer to allocated buffer with command copied into it, and
|
||||
* response space immediately following.
|
||||
*/
|
||||
void *
|
||||
octeon_alloc_soft_command_resp(struct octeon_device *oct,
|
||||
struct octeon_instr_64B *cmd,
|
||||
size_t rdatasize);
|
||||
|
||||
/** Send a NIC data packet to the device
|
||||
* @param oct - octeon device pointer
|
||||
* @param ndata - control structure with queueing, and buffer information
|
||||
*
|
||||
* @returns IQ_FAILED if it failed to add to the input queue, IQ_STOP if the
|
||||
* queue should be stopped, and IQ_SEND_OK if it sent okay.
|
||||
*/
|
||||
int octnet_send_nic_data_pkt(struct octeon_device *oct,
|
||||
struct octnic_data_pkt *ndata, u32 xmit_more);
|
||||
|
||||
/** Send a NIC control packet to the device
|
||||
* @param oct - octeon device pointer
|
||||
* @param nctrl - control structure with command, timeout, and callback info
|
||||
* @param nparams - response control structure
|
||||
*
|
||||
* @returns IQ_FAILED if it failed to add to the input queue, IQ_STOP if the
|
||||
* queue should be stopped, and IQ_SEND_OK if it sent okay.
|
||||
*/
|
||||
int
|
||||
octnet_send_nic_ctrl_pkt(struct octeon_device *oct,
|
||||
struct octnic_ctrl_pkt *nctrl,
|
||||
struct octnic_ctrl_params nparams);
|
||||
|
||||
#endif
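
For the data path, the structures and helpers above are typically combined as in the following sketch; the queue selection is simplified, the DMA mapping of the payload into ndata.cmd.dptr (done by the NIC module) is omitted, and the function name is hypothetical:

static int lio_example_xmit(struct octeon_device *oct, void *buf,
			    u32 ifidx, u32 datasize)
{
	struct octnic_data_pkt ndata;
	union octnic_cmd_setup cmdsetup;

	memset(&ndata, 0, sizeof(ndata));
	ndata.buf = buf;
	ndata.q_no = ifidx;			/* one IQ per interface here */
	ndata.reqtype = REQTYPE_NORESP_NET;	/* no response expected */
	ndata.datasize = datasize;

	if (octnet_iq_is_full(oct, ndata.q_no))
		return IQ_SEND_STOP;

	memset(&cmdsetup, 0, sizeof(cmdsetup));
	cmdsetup.s.ifidx = ifidx;
	cmdsetup.s.u.datasize = datasize;	/* single, non-gather buffer */

	/* Build the 64B instruction; tag 0 defaults to LIO_DATA(ifidx). */
	octnet_prepare_pci_cmd(&ndata.cmd, &cmdsetup, 0);

	/* ndata.cmd.dptr would be set to the DMA address of buf here. */

	return octnet_send_nic_data_pkt(oct, &ndata, 0);
}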
|
drivers/net/ethernet/cavium/liquidio/request_manager.c (new file, 764 lines)
@@ -0,0 +1,764 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
|
||||
#define INCR_INSTRQUEUE_PKT_COUNT(octeon_dev_ptr, iq_no, field, count) \
|
||||
(octeon_dev_ptr->instr_queue[iq_no]->stats.field += count)
|
||||
|
||||
struct iq_post_status {
|
||||
int status;
|
||||
int index;
|
||||
};
|
||||
|
||||
static void check_db_timeout(struct work_struct *work);
|
||||
static void __check_db_timeout(struct octeon_device *oct, unsigned long iq_no);
|
||||
|
||||
static void (*reqtype_free_fn[MAX_OCTEON_DEVICES][REQTYPE_LAST + 1]) (void *);
|
||||
|
||||
static inline int IQ_INSTR_MODE_64B(struct octeon_device *oct, int iq_no)
|
||||
{
|
||||
struct octeon_instr_queue *iq =
|
||||
(struct octeon_instr_queue *)oct->instr_queue[iq_no];
|
||||
return iq->iqcmd_64B;
|
||||
}
|
||||
|
||||
#define IQ_INSTR_MODE_32B(oct, iq_no) (!IQ_INSTR_MODE_64B(oct, iq_no))
|
||||
|
||||
/* Define this to return the request status comaptible to old code */
|
||||
/*#define OCTEON_USE_OLD_REQ_STATUS*/
|
||||
|
||||
/* Return 0 on success, 1 on failure */
|
||||
int octeon_init_instr_queue(struct octeon_device *oct,
|
||||
u32 iq_no, u32 num_descs)
|
||||
{
|
||||
struct octeon_instr_queue *iq;
|
||||
struct octeon_iq_config *conf = NULL;
|
||||
u32 q_size;
|
||||
struct cavium_wq *db_wq;
|
||||
|
||||
if (OCTEON_CN6XXX(oct))
|
||||
conf = &(CFG_GET_IQ_CFG(CHIP_FIELD(oct, cn6xxx, conf)));
|
||||
|
||||
if (!conf) {
|
||||
dev_err(&oct->pci_dev->dev, "Unsupported Chip %x\n",
|
||||
oct->chip_id);
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (num_descs & (num_descs - 1)) {
|
||||
dev_err(&oct->pci_dev->dev,
|
||||
"Number of descriptors for instr queue %d not in power of 2.\n",
|
||||
iq_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
q_size = (u32)conf->instr_type * num_descs;
|
||||
|
||||
iq = oct->instr_queue[iq_no];
|
||||
|
||||
iq->base_addr = lio_dma_alloc(oct, q_size,
|
||||
(dma_addr_t *)&iq->base_addr_dma);
|
||||
if (!iq->base_addr) {
|
||||
dev_err(&oct->pci_dev->dev, "Cannot allocate memory for instr queue %d\n",
|
||||
iq_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
iq->max_count = num_descs;
|
||||
|
||||
/* Initialize a list to hold requests that have been posted to Octeon
|
||||
* but have yet to be fetched by Octeon
|
||||
*/
|
||||
iq->request_list = vmalloc(sizeof(*iq->request_list) * num_descs);
|
||||
if (!iq->request_list) {
|
||||
lio_dma_free(oct, q_size, iq->base_addr, iq->base_addr_dma);
|
||||
dev_err(&oct->pci_dev->dev, "Alloc failed for IQ[%d] nr free list\n",
|
||||
iq_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
memset(iq->request_list, 0, sizeof(*iq->request_list) * num_descs);
|
||||
|
||||
dev_dbg(&oct->pci_dev->dev, "IQ[%d]: base: %p basedma: %llx count: %d\n",
|
||||
iq_no, iq->base_addr, iq->base_addr_dma, iq->max_count);
|
||||
|
||||
iq->iq_no = iq_no;
|
||||
iq->fill_threshold = (u32)conf->db_min;
|
||||
iq->fill_cnt = 0;
|
||||
iq->host_write_index = 0;
|
||||
iq->octeon_read_index = 0;
|
||||
iq->flush_index = 0;
|
||||
iq->last_db_time = 0;
|
||||
iq->do_auto_flush = 1;
|
||||
iq->db_timeout = (u32)conf->db_timeout;
|
||||
atomic_set(&iq->instr_pending, 0);
|
||||
|
||||
/* Initialize the spinlock for this instruction queue */
|
||||
spin_lock_init(&iq->lock);
|
||||
|
||||
oct->io_qmask.iq |= (1 << iq_no);
|
||||
|
||||
/* Set the 32B/64B mode for each input queue */
|
||||
oct->io_qmask.iq64B |= ((conf->instr_type == 64) << iq_no);
|
||||
iq->iqcmd_64B = (conf->instr_type == 64);
|
||||
|
||||
oct->fn_list.setup_iq_regs(oct, iq_no);
|
||||
|
||||
oct->check_db_wq[iq_no].wq = create_workqueue("check_iq_db");
|
||||
if (!oct->check_db_wq[iq_no].wq) {
|
||||
lio_dma_free(oct, q_size, iq->base_addr, iq->base_addr_dma);
|
||||
dev_err(&oct->pci_dev->dev, "check db wq create failed for iq %d\n",
|
||||
iq_no);
|
||||
return 1;
|
||||
}
|
||||
|
||||
db_wq = &oct->check_db_wq[iq_no];
|
||||
|
||||
INIT_DELAYED_WORK(&db_wq->wk.work, check_db_timeout);
|
||||
db_wq->wk.ctxptr = oct;
|
||||
db_wq->wk.ctxul = iq_no;
|
||||
queue_delayed_work(db_wq->wq, &db_wq->wk.work, msecs_to_jiffies(1));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_delete_instr_queue(struct octeon_device *oct, u32 iq_no)
|
||||
{
|
||||
u64 desc_size = 0, q_size;
|
||||
struct octeon_instr_queue *iq = oct->instr_queue[iq_no];
|
||||
|
||||
cancel_delayed_work_sync(&oct->check_db_wq[iq_no].wk.work);
|
||||
flush_workqueue(oct->check_db_wq[iq_no].wq);
|
||||
destroy_workqueue(oct->check_db_wq[iq_no].wq);
|
||||
|
||||
if (OCTEON_CN6XXX(oct))
|
||||
desc_size =
|
||||
CFG_GET_IQ_INSTR_TYPE(CHIP_FIELD(oct, cn6xxx, conf));
|
||||
|
||||
if (iq->request_list)
|
||||
vfree(iq->request_list);
|
||||
|
||||
if (iq->base_addr) {
|
||||
q_size = iq->max_count * desc_size;
|
||||
lio_dma_free(oct, (u32)q_size, iq->base_addr,
|
||||
iq->base_addr_dma);
|
||||
return 0;
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Return 0 on success, 1 on failure */
|
||||
int octeon_setup_iq(struct octeon_device *oct,
|
||||
u32 iq_no,
|
||||
u32 num_descs,
|
||||
void *app_ctx)
|
||||
{
|
||||
if (oct->instr_queue[iq_no]) {
|
||||
dev_dbg(&oct->pci_dev->dev, "IQ is in use. Cannot create the IQ: %d again\n",
|
||||
iq_no);
|
||||
oct->instr_queue[iq_no]->app_ctx = app_ctx;
|
||||
return 0;
|
||||
}
|
||||
oct->instr_queue[iq_no] =
|
||||
vmalloc(sizeof(struct octeon_instr_queue));
|
||||
if (!oct->instr_queue[iq_no])
|
||||
return 1;
|
||||
|
||||
memset(oct->instr_queue[iq_no], 0,
|
||||
sizeof(struct octeon_instr_queue));
|
||||
|
||||
oct->instr_queue[iq_no]->app_ctx = app_ctx;
|
||||
if (octeon_init_instr_queue(oct, iq_no, num_descs)) {
|
||||
vfree(oct->instr_queue[iq_no]);
|
||||
oct->instr_queue[iq_no] = NULL;
|
||||
return 1;
|
||||
}
|
||||
|
||||
oct->num_iqs++;
|
||||
oct->fn_list.enable_io_queues(oct);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int lio_wait_for_instr_fetch(struct octeon_device *oct)
|
||||
{
|
||||
int i, retry = 1000, pending, instr_cnt = 0;
|
||||
|
||||
do {
|
||||
instr_cnt = 0;
|
||||
|
||||
/*for (i = 0; i < oct->num_iqs; i++) {*/
|
||||
for (i = 0; i < MAX_OCTEON_INSTR_QUEUES; i++) {
|
||||
if (!(oct->io_qmask.iq & (1UL << i)))
|
||||
continue;
|
||||
pending =
|
||||
atomic_read(&oct->
|
||||
instr_queue[i]->instr_pending);
|
||||
if (pending)
|
||||
__check_db_timeout(oct, i);
|
||||
instr_cnt += pending;
|
||||
}
|
||||
|
||||
if (instr_cnt == 0)
|
||||
break;
|
||||
|
||||
schedule_timeout_uninterruptible(1);
|
||||
|
||||
} while (retry-- && instr_cnt);
|
||||
|
||||
return instr_cnt;
|
||||
}
|
||||
|
||||
static inline void
|
||||
ring_doorbell(struct octeon_device *oct, struct octeon_instr_queue *iq)
|
||||
{
|
||||
if (atomic_read(&oct->status) == OCT_DEV_RUNNING) {
|
||||
writel(iq->fill_cnt, iq->doorbell_reg);
|
||||
/* make sure doorbell write goes through */
|
||||
mmiowb();
|
||||
iq->fill_cnt = 0;
|
||||
iq->last_db_time = jiffies;
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
static inline void __copy_cmd_into_iq(struct octeon_instr_queue *iq,
|
||||
u8 *cmd)
|
||||
{
|
||||
u8 *iqptr, cmdsize;
|
||||
|
||||
cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
|
||||
iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
|
||||
|
||||
memcpy(iqptr, cmd, cmdsize);
|
||||
}
|
||||
|
||||
static inline int
|
||||
__post_command(struct octeon_device *octeon_dev __attribute__((unused)),
|
||||
struct octeon_instr_queue *iq,
|
||||
u32 force_db __attribute__((unused)), u8 *cmd)
|
||||
{
|
||||
u32 index = -1;
|
||||
|
||||
/* This ensures that the read index does not wrap around to the same
|
||||
* position if queue gets full before Octeon could fetch any instr.
|
||||
*/
|
||||
if (atomic_read(&iq->instr_pending) >= (s32)(iq->max_count - 1))
|
||||
return -1;
|
||||
|
||||
__copy_cmd_into_iq(iq, cmd);
|
||||
|
||||
/* "index" is returned, host_write_index is modified. */
|
||||
index = iq->host_write_index;
|
||||
INCR_INDEX_BY1(iq->host_write_index, iq->max_count);
|
||||
iq->fill_cnt++;
|
||||
|
||||
/* Flush the command into memory. We need to be sure the data is in
|
||||
* memory before indicating that the instruction is pending.
|
||||
*/
|
||||
wmb();
|
||||
|
||||
atomic_inc(&iq->instr_pending);
|
||||
|
||||
return index;
|
||||
}
|
||||
|
||||
static inline struct iq_post_status
|
||||
__post_command2(struct octeon_device *octeon_dev __attribute__((unused)),
|
||||
struct octeon_instr_queue *iq,
|
||||
u32 force_db __attribute__((unused)), u8 *cmd)
|
||||
{
|
||||
struct iq_post_status st;
|
||||
|
||||
st.status = IQ_SEND_OK;
|
||||
|
||||
/* This ensures that the read index does not wrap around to the same
|
||||
* position if queue gets full before Octeon could fetch any instr.
|
||||
*/
|
||||
if (atomic_read(&iq->instr_pending) >= (s32)(iq->max_count - 1)) {
|
||||
st.status = IQ_SEND_FAILED;
|
||||
st.index = -1;
|
||||
return st;
|
||||
}
|
||||
|
||||
if (atomic_read(&iq->instr_pending) >= (s32)(iq->max_count - 2))
|
||||
st.status = IQ_SEND_STOP;
|
||||
|
||||
__copy_cmd_into_iq(iq, cmd);
|
||||
|
||||
/* "index" is returned, host_write_index is modified. */
|
||||
st.index = iq->host_write_index;
|
||||
INCR_INDEX_BY1(iq->host_write_index, iq->max_count);
|
||||
iq->fill_cnt++;
|
||||
|
||||
/* Flush the command into memory. We need to be sure the data is in
|
||||
* memory before indicating that the instruction is pending.
|
||||
*/
|
||||
wmb();
|
||||
|
||||
atomic_inc(&iq->instr_pending);
|
||||
|
||||
return st;
|
||||
}
|
||||
|
||||
int
|
||||
octeon_register_reqtype_free_fn(struct octeon_device *oct, int reqtype,
|
||||
void (*fn)(void *))
|
||||
{
|
||||
if (reqtype > REQTYPE_LAST) {
|
||||
dev_err(&oct->pci_dev->dev, "%s: Invalid reqtype: %d\n",
|
||||
__func__, reqtype);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
reqtype_free_fn[oct->octeon_id][reqtype] = fn;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void
|
||||
__add_to_request_list(struct octeon_instr_queue *iq,
|
||||
int idx, void *buf, int reqtype)
|
||||
{
|
||||
iq->request_list[idx].buf = buf;
|
||||
iq->request_list[idx].reqtype = reqtype;
|
||||
}
|
||||
|
||||
int
|
||||
lio_process_iq_request_list(struct octeon_device *oct,
|
||||
struct octeon_instr_queue *iq)
|
||||
{
|
||||
int reqtype;
|
||||
void *buf;
|
||||
u32 old = iq->flush_index;
|
||||
u32 inst_count = 0;
|
||||
unsigned pkts_compl = 0, bytes_compl = 0;
|
||||
struct octeon_soft_command *sc;
|
||||
struct octeon_instr_irh *irh;
|
||||
|
||||
while (old != iq->octeon_read_index) {
|
||||
reqtype = iq->request_list[old].reqtype;
|
||||
buf = iq->request_list[old].buf;
|
||||
|
||||
if (reqtype == REQTYPE_NONE)
|
||||
goto skip_this;
|
||||
|
||||
octeon_update_tx_completion_counters(buf, reqtype, &pkts_compl,
|
||||
&bytes_compl);
|
||||
|
||||
switch (reqtype) {
|
||||
case REQTYPE_NORESP_NET:
|
||||
case REQTYPE_NORESP_NET_SG:
|
||||
case REQTYPE_RESP_NET_SG:
|
||||
reqtype_free_fn[oct->octeon_id][reqtype](buf);
|
||||
break;
|
||||
case REQTYPE_RESP_NET:
|
||||
case REQTYPE_SOFT_COMMAND:
|
||||
sc = buf;
|
||||
|
||||
irh = (struct octeon_instr_irh *)&sc->cmd.irh;
|
||||
if (irh->rflag) {
|
||||
/* We're expecting a response from Octeon.
|
||||
* It's up to lio_process_ordered_list() to
|
||||
* process sc. Add sc to the ordered soft
|
||||
* command response list because we expect
|
||||
* a response from Octeon.
|
||||
*/
|
||||
spin_lock_bh(&oct->response_list
|
||||
[OCTEON_ORDERED_SC_LIST].lock);
|
||||
atomic_inc(&oct->response_list
|
||||
[OCTEON_ORDERED_SC_LIST].
|
||||
pending_req_count);
|
||||
list_add_tail(&sc->node, &oct->response_list
|
||||
[OCTEON_ORDERED_SC_LIST].head);
|
||||
spin_unlock_bh(&oct->response_list
|
||||
[OCTEON_ORDERED_SC_LIST].lock);
|
||||
} else {
|
||||
if (sc->callback) {
|
||||
sc->callback(oct, OCTEON_REQUEST_DONE,
|
||||
sc->callback_arg);
|
||||
}
|
||||
}
|
||||
break;
|
||||
default:
|
||||
dev_err(&oct->pci_dev->dev,
|
||||
"%s Unknown reqtype: %d buf: %p at idx %d\n",
|
||||
__func__, reqtype, buf, old);
|
||||
}
|
||||
|
||||
iq->request_list[old].buf = NULL;
|
||||
iq->request_list[old].reqtype = 0;
|
||||
|
||||
skip_this:
|
||||
inst_count++;
|
||||
INCR_INDEX_BY1(old, iq->max_count);
|
||||
}
|
||||
if (bytes_compl)
|
||||
octeon_report_tx_completion_to_bql(iq->app_ctx, pkts_compl,
|
||||
bytes_compl);
|
||||
iq->flush_index = old;
|
||||
|
||||
return inst_count;
|
||||
}
|
||||
|
||||
static inline void
|
||||
update_iq_indices(struct octeon_device *oct, struct octeon_instr_queue *iq)
|
||||
{
|
||||
u32 inst_processed = 0;
|
||||
|
||||
/* Calculate how many commands Octeon has read and move the read index
|
||||
* accordingly.
|
||||
*/
|
||||
iq->octeon_read_index = oct->fn_list.update_iq_read_idx(oct, iq);
|
||||
|
||||
/* Move the NORESPONSE requests to the per-device completion list. */
|
||||
if (iq->flush_index != iq->octeon_read_index)
|
||||
inst_processed = lio_process_iq_request_list(oct, iq);
|
||||
|
||||
if (inst_processed)
|
||||
atomic_sub(inst_processed, &iq->instr_pending);
|
||||
iq->stats.instr_processed += inst_processed;
|
||||
}
|
||||
|
||||
static void
|
||||
octeon_flush_iq(struct octeon_device *oct, struct octeon_instr_queue *iq,
|
||||
u32 pending_thresh)
|
||||
{
|
||||
if (atomic_read(&iq->instr_pending) >= (s32)pending_thresh) {
|
||||
spin_lock_bh(&iq->lock);
|
||||
update_iq_indices(oct, iq);
|
||||
spin_unlock_bh(&iq->lock);
|
||||
}
|
||||
}
|
||||
|
||||
static void __check_db_timeout(struct octeon_device *oct, unsigned long iq_no)
|
||||
{
|
||||
struct octeon_instr_queue *iq;
|
||||
u64 next_time;
|
||||
|
||||
if (!oct)
|
||||
return;
|
||||
iq = oct->instr_queue[iq_no];
|
||||
if (!iq)
|
||||
return;
|
||||
|
||||
/* If jiffies - last_db_time < db_timeout do nothing */
|
||||
next_time = iq->last_db_time + iq->db_timeout;
|
||||
if (!time_after(jiffies, (unsigned long)next_time))
|
||||
return;
|
||||
iq->last_db_time = jiffies;
|
||||
|
||||
/* Get the lock and prevent tasklets. This routine gets called from
|
||||
* the poll thread. Instructions can now be posted in tasklet context
|
||||
*/
|
||||
spin_lock_bh(&iq->lock);
|
||||
if (iq->fill_cnt != 0)
|
||||
ring_doorbell(oct, iq);
|
||||
|
||||
spin_unlock_bh(&iq->lock);
|
||||
|
||||
/* Flush the instruction queue */
|
||||
if (iq->do_auto_flush)
|
||||
octeon_flush_iq(oct, iq, 1);
|
||||
}
|
||||
|
||||
/* Called by the Poll thread at regular intervals to check the instruction
|
||||
* queue for commands to be posted and for commands that were fetched by Octeon.
|
||||
*/
|
||||
static void check_db_timeout(struct work_struct *work)
|
||||
{
|
||||
struct cavium_wk *wk = (struct cavium_wk *)work;
|
||||
struct octeon_device *oct = (struct octeon_device *)wk->ctxptr;
|
||||
unsigned long iq_no = wk->ctxul;
|
||||
struct cavium_wq *db_wq = &oct->check_db_wq[iq_no];
|
||||
|
||||
__check_db_timeout(oct, iq_no);
|
||||
queue_delayed_work(db_wq->wq, &db_wq->wk.work, msecs_to_jiffies(1));
|
||||
}
|
||||
|
||||
int
|
||||
octeon_send_command(struct octeon_device *oct, u32 iq_no,
|
||||
u32 force_db, void *cmd, void *buf,
|
||||
u32 datasize, u32 reqtype)
|
||||
{
|
||||
struct iq_post_status st;
|
||||
struct octeon_instr_queue *iq = oct->instr_queue[iq_no];
|
||||
|
||||
spin_lock_bh(&iq->lock);
|
||||
|
||||
st = __post_command2(oct, iq, force_db, cmd);
|
||||
|
||||
if (st.status != IQ_SEND_FAILED) {
|
||||
octeon_report_sent_bytes_to_bql(buf, reqtype);
|
||||
__add_to_request_list(iq, st.index, buf, reqtype);
|
||||
INCR_INSTRQUEUE_PKT_COUNT(oct, iq_no, bytes_sent, datasize);
|
||||
INCR_INSTRQUEUE_PKT_COUNT(oct, iq_no, instr_posted, 1);
|
||||
|
||||
if (iq->fill_cnt >= iq->fill_threshold || force_db)
|
||||
ring_doorbell(oct, iq);
|
||||
} else {
|
||||
INCR_INSTRQUEUE_PKT_COUNT(oct, iq_no, instr_dropped, 1);
|
||||
}
|
||||
|
||||
spin_unlock_bh(&iq->lock);
|
||||
|
||||
if (iq->do_auto_flush)
|
||||
octeon_flush_iq(oct, iq, 2);
|
||||
|
||||
return st.status;
|
||||
}
|
||||
|
||||
void
|
||||
octeon_prepare_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc,
|
||||
u8 opcode,
|
||||
u8 subcode,
|
||||
u32 irh_ossp,
|
||||
u64 ossp0,
|
||||
u64 ossp1)
|
||||
{
|
||||
struct octeon_config *oct_cfg;
|
||||
struct octeon_instr_ih *ih;
|
||||
struct octeon_instr_irh *irh;
|
||||
struct octeon_instr_rdp *rdp;
|
||||
|
||||
BUG_ON(opcode > 15);
|
||||
BUG_ON(subcode > 127);
|
||||
|
||||
oct_cfg = octeon_get_conf(oct);
|
||||
|
||||
ih = (struct octeon_instr_ih *)&sc->cmd.ih;
|
||||
ih->tagtype = ATOMIC_TAG;
|
||||
ih->tag = LIO_CONTROL;
|
||||
ih->raw = 1;
|
||||
ih->grp = CFG_GET_CTRL_Q_GRP(oct_cfg);
|
||||
|
||||
if (sc->datasize) {
|
||||
ih->dlengsz = sc->datasize;
|
||||
ih->rs = 1;
|
||||
}
|
||||
|
||||
irh = (struct octeon_instr_irh *)&sc->cmd.irh;
|
||||
irh->opcode = opcode;
|
||||
irh->subcode = subcode;
|
||||
|
||||
/* opcode/subcode specific parameters (ossp) */
|
||||
irh->ossp = irh_ossp;
|
||||
sc->cmd.ossp[0] = ossp0;
|
||||
sc->cmd.ossp[1] = ossp1;
|
||||
|
||||
if (sc->rdatasize) {
|
||||
rdp = (struct octeon_instr_rdp *)&sc->cmd.rdp;
|
||||
rdp->pcie_port = oct->pcie_port;
|
||||
rdp->rlen = sc->rdatasize;
|
||||
|
||||
irh->rflag = 1;
|
||||
irh->len = 4;
|
||||
ih->fsz = 40; /* irh+ossp[0]+ossp[1]+rdp+rptr = 40 bytes */
|
||||
} else {
|
||||
irh->rflag = 0;
|
||||
irh->len = 2;
|
||||
ih->fsz = 24; /* irh + ossp[0] + ossp[1] = 24 bytes */
|
||||
}
|
||||
|
||||
while (!(oct->io_qmask.iq & (1 << sc->iq_no)))
|
||||
sc->iq_no++;
|
||||
}
|
||||
|
||||
int octeon_send_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc)
|
||||
{
|
||||
struct octeon_instr_ih *ih;
|
||||
struct octeon_instr_irh *irh;
|
||||
struct octeon_instr_rdp *rdp;
|
||||
|
||||
ih = (struct octeon_instr_ih *)&sc->cmd.ih;
|
||||
if (ih->dlengsz) {
|
||||
BUG_ON(!sc->dmadptr);
|
||||
sc->cmd.dptr = sc->dmadptr;
|
||||
}
|
||||
|
||||
irh = (struct octeon_instr_irh *)&sc->cmd.irh;
|
||||
if (irh->rflag) {
|
||||
BUG_ON(!sc->dmarptr);
|
||||
BUG_ON(!sc->status_word);
|
||||
*sc->status_word = COMPLETION_WORD_INIT;
|
||||
|
||||
rdp = (struct octeon_instr_rdp *)&sc->cmd.rdp;
|
||||
|
||||
sc->cmd.rptr = sc->dmarptr;
|
||||
}
|
||||
|
||||
if (sc->wait_time)
|
||||
sc->timeout = jiffies + sc->wait_time;
|
||||
|
||||
return octeon_send_command(oct, sc->iq_no, 1, &sc->cmd, sc,
|
||||
(u32)ih->dlengsz, REQTYPE_SOFT_COMMAND);
|
||||
}
|
||||
|
||||
int octeon_setup_sc_buffer_pool(struct octeon_device *oct)
|
||||
{
|
||||
int i;
|
||||
u64 dma_addr;
|
||||
struct octeon_soft_command *sc;
|
||||
|
||||
INIT_LIST_HEAD(&oct->sc_buf_pool.head);
|
||||
spin_lock_init(&oct->sc_buf_pool.lock);
|
||||
atomic_set(&oct->sc_buf_pool.alloc_buf_count, 0);
|
||||
|
||||
for (i = 0; i < MAX_SOFT_COMMAND_BUFFERS; i++) {
|
||||
sc = (struct octeon_soft_command *)
|
||||
lio_dma_alloc(oct,
|
||||
SOFT_COMMAND_BUFFER_SIZE,
|
||||
(dma_addr_t *)&dma_addr);
|
||||
if (!sc)
|
||||
return 1;
|
||||
|
||||
sc->dma_addr = dma_addr;
|
||||
sc->size = SOFT_COMMAND_BUFFER_SIZE;
|
||||
|
||||
list_add_tail(&sc->node, &oct->sc_buf_pool.head);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int octeon_free_sc_buffer_pool(struct octeon_device *oct)
|
||||
{
|
||||
struct list_head *tmp, *tmp2;
|
||||
struct octeon_soft_command *sc;
|
||||
|
||||
spin_lock(&oct->sc_buf_pool.lock);
|
||||
|
||||
list_for_each_safe(tmp, tmp2, &oct->sc_buf_pool.head) {
|
||||
list_del(tmp);
|
||||
|
||||
sc = (struct octeon_soft_command *)tmp;
|
||||
|
||||
lio_dma_free(oct, sc->size, sc, sc->dma_addr);
|
||||
}
|
||||
|
||||
INIT_LIST_HEAD(&oct->sc_buf_pool.head);
|
||||
|
||||
spin_unlock(&oct->sc_buf_pool.lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct octeon_soft_command *octeon_alloc_soft_command(struct octeon_device *oct,
|
||||
u32 datasize,
|
||||
u32 rdatasize,
|
||||
u32 ctxsize)
|
||||
{
|
||||
u64 dma_addr;
|
||||
u32 size;
|
||||
u32 offset = sizeof(struct octeon_soft_command);
|
||||
struct octeon_soft_command *sc = NULL;
|
||||
struct list_head *tmp;
|
||||
|
||||
BUG_ON((offset + datasize + rdatasize + ctxsize) >
|
||||
SOFT_COMMAND_BUFFER_SIZE);
|
||||
|
||||
spin_lock(&oct->sc_buf_pool.lock);
|
||||
|
||||
if (list_empty(&oct->sc_buf_pool.head)) {
|
||||
spin_unlock(&oct->sc_buf_pool.lock);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
list_for_each(tmp, &oct->sc_buf_pool.head)
|
||||
break;
|
||||
|
||||
list_del(tmp);
|
||||
|
||||
atomic_inc(&oct->sc_buf_pool.alloc_buf_count);
|
||||
|
||||
spin_unlock(&oct->sc_buf_pool.lock);
|
||||
|
||||
sc = (struct octeon_soft_command *)tmp;
|
||||
|
||||
dma_addr = sc->dma_addr;
|
||||
size = sc->size;
|
||||
|
||||
memset(sc, 0, sc->size);
|
||||
|
||||
sc->dma_addr = dma_addr;
|
||||
sc->size = size;
|
||||
|
||||
if (ctxsize) {
|
||||
sc->ctxptr = (u8 *)sc + offset;
|
||||
sc->ctxsize = ctxsize;
|
||||
}
|
||||
|
||||
/* Start data at 128 byte boundary */
|
||||
offset = (offset + ctxsize + 127) & 0xffffff80;
|
||||
|
||||
if (datasize) {
|
||||
sc->virtdptr = (u8 *)sc + offset;
|
||||
sc->dmadptr = dma_addr + offset;
|
||||
sc->datasize = datasize;
|
||||
}
|
||||
|
||||
/* Start rdata at 128 byte boundary */
|
||||
offset = (offset + datasize + 127) & 0xffffff80;
|
||||
|
||||
if (rdatasize) {
|
||||
BUG_ON(rdatasize < 16);
|
||||
sc->virtrptr = (u8 *)sc + offset;
|
||||
sc->dmarptr = dma_addr + offset;
|
||||
sc->rdatasize = rdatasize;
|
||||
sc->status_word = (u64 *)((u8 *)(sc->virtrptr) + rdatasize - 8);
|
||||
}
|
||||
|
||||
return sc;
|
||||
}
|
||||
|
||||
void octeon_free_soft_command(struct octeon_device *oct,
|
||||
struct octeon_soft_command *sc)
|
||||
{
|
||||
spin_lock(&oct->sc_buf_pool.lock);
|
||||
|
||||
list_add_tail(&sc->node, &oct->sc_buf_pool.head);
|
||||
|
||||
atomic_dec(&oct->sc_buf_pool.alloc_buf_count);
|
||||
|
||||
spin_unlock(&oct->sc_buf_pool.lock);
|
||||
}
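
Putting the soft-command helpers in this file together, a hypothetical request that carries 32 bytes of command data and expects a 16-byte response could look like the sketch below; the opcode/subcode pair reuses values already used by octeon_nic.c, and the payload content is left zeroed:

static int lio_example_soft_cmd(struct octeon_device *oct)
{
	struct octeon_soft_command *sc;
	int retval;

	/* 32B of command data, 16B response buffer, no context area. */
	sc = octeon_alloc_soft_command(oct, 32, 16, 0);
	if (!sc)
		return -ENOMEM;

	/* The payload for the core application goes into sc->virtdptr. */
	memset(sc->virtdptr, 0, 32);

	octeon_prepare_soft_command(oct, sc, OPCODE_NIC, OPCODE_NIC_CMD,
				    0, 0, 0);

	sc->wait_time = 1000;	/* lets the poll thread time the request out */

	retval = octeon_send_soft_command(oct, sc);
	if (retval) {
		octeon_free_soft_command(oct, sc);
		return -EIO;
	}

	/* Completion is reported later through lio_process_ordered_list(). */
	return 0;
}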
|
drivers/net/ethernet/cavium/liquidio/response_manager.c (new file, 178 lines)
@@ -0,0 +1,178 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/dma-mapping.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include "octeon_config.h"
|
||||
#include "liquidio_common.h"
|
||||
#include "octeon_droq.h"
|
||||
#include "octeon_iq.h"
|
||||
#include "response_manager.h"
|
||||
#include "octeon_device.h"
|
||||
#include "octeon_nic.h"
|
||||
#include "octeon_main.h"
|
||||
#include "octeon_network.h"
|
||||
#include "cn66xx_regs.h"
|
||||
#include "cn66xx_device.h"
|
||||
#include "cn68xx_regs.h"
|
||||
#include "cn68xx_device.h"
|
||||
#include "liquidio_image.h"
|
||||
|
||||
static void oct_poll_req_completion(struct work_struct *work);
|
||||
|
||||
int octeon_setup_response_list(struct octeon_device *oct)
|
||||
{
|
||||
int i, ret = 0;
|
||||
struct cavium_wq *cwq;
|
||||
|
||||
for (i = 0; i < MAX_RESPONSE_LISTS; i++) {
|
||||
INIT_LIST_HEAD(&oct->response_list[i].head);
|
||||
spin_lock_init(&oct->response_list[i].lock);
|
||||
atomic_set(&oct->response_list[i].pending_req_count, 0);
|
||||
}
|
||||
|
||||
oct->dma_comp_wq.wq = create_workqueue("dma-comp");
|
||||
if (!oct->dma_comp_wq.wq) {
|
||||
dev_err(&oct->pci_dev->dev, "failed to create wq thread\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
cwq = &oct->dma_comp_wq;
|
||||
INIT_DELAYED_WORK(&cwq->wk.work, oct_poll_req_completion);
|
||||
cwq->wk.ctxptr = oct;
|
||||
queue_delayed_work(cwq->wq, &cwq->wk.work, msecs_to_jiffies(100));
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
void octeon_delete_response_list(struct octeon_device *oct)
|
||||
{
|
||||
cancel_delayed_work_sync(&oct->dma_comp_wq.wk.work);
|
||||
flush_workqueue(oct->dma_comp_wq.wq);
|
||||
destroy_workqueue(oct->dma_comp_wq.wq);
|
||||
}
|
||||
|
||||
int lio_process_ordered_list(struct octeon_device *octeon_dev,
|
||||
u32 force_quit)
|
||||
{
|
||||
struct octeon_response_list *ordered_sc_list;
|
||||
struct octeon_soft_command *sc;
|
||||
int request_complete = 0;
|
||||
int resp_to_process = MAX_ORD_REQS_TO_PROCESS;
|
||||
u32 status;
|
||||
u64 status64;
|
||||
struct octeon_instr_rdp *rdp;
|
||||
|
||||
ordered_sc_list = &octeon_dev->response_list[OCTEON_ORDERED_SC_LIST];
|
||||
|
||||
do {
|
||||
spin_lock_bh(&ordered_sc_list->lock);
|
||||
|
||||
if (ordered_sc_list->head.next == &ordered_sc_list->head) {
|
||||
/* ordered_sc_list is empty; there is
|
||||
* nothing to process
|
||||
*/
|
||||
spin_unlock_bh
|
||||
(&ordered_sc_list->lock);
|
||||
return 1;
|
||||
}
|
||||
|
||||
sc = (struct octeon_soft_command *)ordered_sc_list->
|
||||
head.next;
|
||||
rdp = (struct octeon_instr_rdp *)&sc->cmd.rdp;
|
||||
|
||||
status = OCTEON_REQUEST_PENDING;
|
||||
|
||||
/* check if octeon has finished DMA'ing a response
|
||||
* to where rptr is pointing to
|
||||
*/
|
||||
dma_sync_single_for_cpu(&octeon_dev->pci_dev->dev,
|
||||
sc->cmd.rptr, rdp->rlen,
|
||||
DMA_FROM_DEVICE);
|
||||
status64 = *sc->status_word;
|
||||
|
||||
if (status64 != COMPLETION_WORD_INIT) {
|
||||
if ((status64 & 0xff) != 0xff) {
|
||||
octeon_swap_8B_data(&status64, 1);
|
||||
if (((status64 & 0xff) != 0xff)) {
|
||||
status = (u32)(status64 &
|
||||
0xffffffffULL);
|
||||
}
|
||||
}
|
||||
} else if (force_quit || (sc->timeout &&
|
||||
time_after(jiffies, (unsigned long)sc->timeout))) {
|
||||
status = OCTEON_REQUEST_TIMEOUT;
|
||||
}
|
||||
|
||||
if (status != OCTEON_REQUEST_PENDING) {
|
||||
/* we have received a response or we have timed out */
|
||||
/* remove node from linked list */
|
||||
list_del(&sc->node);
|
||||
atomic_dec(&octeon_dev->response_list
|
||||
[OCTEON_ORDERED_SC_LIST].
|
||||
pending_req_count);
|
||||
spin_unlock_bh
|
||||
(&ordered_sc_list->lock);
|
||||
|
||||
if (sc->callback)
|
||||
sc->callback(octeon_dev, status,
|
||||
sc->callback_arg);
|
||||
|
||||
request_complete++;
|
||||
|
||||
} else {
|
||||
/* no response yet */
|
||||
request_complete = 0;
|
||||
spin_unlock_bh
|
||||
(&ordered_sc_list->lock);
|
||||
}
|
||||
|
||||
/* If we hit the Max Ordered requests to process every loop,
|
||||
* we quit
|
||||
* and let this function be invoked the next time the poll
|
||||
* thread runs
|
||||
* to process the remaining requests. This function can take up
|
||||
* the entire CPU if there is no upper limit to the requests
|
||||
* processed.
|
||||
*/
|
||||
if (request_complete >= resp_to_process)
|
||||
break;
|
||||
} while (request_complete);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void oct_poll_req_completion(struct work_struct *work)
|
||||
{
|
||||
struct cavium_wk *wk = (struct cavium_wk *)work;
|
||||
struct octeon_device *oct = (struct octeon_device *)wk->ctxptr;
|
||||
struct cavium_wq *cwq = &oct->dma_comp_wq;
|
||||
|
||||
lio_process_ordered_list(oct, 0);
|
||||
|
||||
queue_delayed_work(cwq->wq, &cwq->wk.work, msecs_to_jiffies(100));
|
||||
}
|
drivers/net/ethernet/cavium/liquidio/response_manager.h (new file, 140 lines)
@@ -0,0 +1,140 @@
|
|||
/**********************************************************************
|
||||
* Author: Cavium, Inc.
|
||||
*
|
||||
* Contact: support@cavium.com
|
||||
* Please include "LiquidIO" in the subject.
|
||||
*
|
||||
* Copyright (c) 2003-2015 Cavium, Inc.
|
||||
*
|
||||
* This file is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License, Version 2, as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This file is distributed in the hope that it will be useful, but
|
||||
* AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
|
||||
* NONINFRINGEMENT. See the GNU General Public License for more
|
||||
* details.
|
||||
*
|
||||
* This file may also be available under a different license from Cavium.
|
||||
* Contact Cavium, Inc. for more information
|
||||
**********************************************************************/
|
||||
|
||||
/*! \file response_manager.h
|
||||
* \brief Host Driver: Response queues for host instructions.
|
||||
*/
|
||||
|
||||
#ifndef __RESPONSE_MANAGER_H__
|
||||
#define __RESPONSE_MANAGER_H__
|
||||
|
||||
/** Maximum ordered requests to process in every invocation of
|
||||
* lio_process_ordered_list(). The function will continue to process requests
|
||||
* as long as it can find one that has finished processing. If it keeps
|
||||
* finding requests that have completed, the function can run forever. The
|
||||
* value defined here sets an upper limit on the number of requests it can
|
||||
* process before it returns control to the poll thread.
|
||||
*/
|
||||
#define MAX_ORD_REQS_TO_PROCESS 4096
|
||||
|
||||
/** Head of a response list. There are several response lists in the
|
||||
* system - one for each response order (unordered, ordered),
|
||||
* and one for no-response entries on each instruction queue.
|
||||
*/
|
||||
struct octeon_response_list {
|
||||
/** List structure to add delete pending entries to */
|
||||
struct list_head head;
|
||||
|
||||
/** A lock for this response list */
|
||||
spinlock_t lock;
|
||||
|
||||
atomic_t pending_req_count;
|
||||
};
|
||||
|
||||
/** The type of response list.
|
||||
*/
|
||||
enum {
|
||||
OCTEON_ORDERED_LIST = 0,
|
||||
OCTEON_UNORDERED_NONBLOCKING_LIST = 1,
|
||||
OCTEON_UNORDERED_BLOCKING_LIST = 2,
|
||||
OCTEON_ORDERED_SC_LIST = 3
|
||||
};
|
||||
|
||||
/** Response Order values for an Octeon Request. */
|
||||
enum {
|
||||
OCTEON_RESP_ORDERED = 0,
|
||||
OCTEON_RESP_UNORDERED = 1,
|
||||
OCTEON_RESP_NORESPONSE = 2
|
||||
};
|
||||
|
||||
/** Error codes used in Octeon Host-Core communication.
|
||||
*
|
||||
* 31 16 15 0
|
||||
* ---------------------------------
|
||||
* | | |
|
||||
* ---------------------------------
|
||||
* Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
|
||||
* are reserved to identify the group to which the error code belongs. The
|
||||
* lower 16-bits, called Minor Error Number, carry the actual code.
|
||||
*
|
||||
* So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
|
||||
*/
|
||||
|
||||
/*------------ Error codes used by host driver -----------------*/
|
||||
#define DRIVER_MAJOR_ERROR_CODE 0x0000
|
||||
|
||||
/** A value of 0x00000000 indicates no error i.e. success */
|
||||
#define DRIVER_ERROR_NONE 0x00000000
|
||||
|
||||
/** (Major number: 0x0000; Minor Number: 0x0001) */
|
||||
#define DRIVER_ERROR_REQ_PENDING 0x00000001
|
||||
#define DRIVER_ERROR_REQ_TIMEOUT 0x00000003
|
||||
#define DRIVER_ERROR_REQ_EINTR 0x00000004
|
||||
#define DRIVER_ERROR_REQ_ENXIO 0x00000006
|
||||
#define DRIVER_ERROR_REQ_ENOMEM 0x0000000C
|
||||
#define DRIVER_ERROR_REQ_EINVAL 0x00000016
|
||||
#define DRIVER_ERROR_REQ_FAILED 0x000000ff
|
||||
|
||||
/** Status for a request.
|
||||
* If a request is not queued to Octeon by the driver, the driver returns
|
||||
* an error condition that's described by one of the OCTEON_REQ_ERR_* values
|
||||
* below. If the request is successfully queued, the driver will return
|
||||
* an OCTEON_REQUEST_PENDING status. OCTEON_REQUEST_TIMEOUT and
|
||||
* OCTEON_REQUEST_INTERRUPTED are only returned by the driver if the
|
||||
* response for request failed to arrive before a time-out period or if
|
||||
* the request processing got interrupted due to a signal, respectively.
|
||||
*/
|
||||
enum {
|
||||
OCTEON_REQUEST_DONE = (DRIVER_ERROR_NONE),
|
||||
OCTEON_REQUEST_PENDING = (DRIVER_ERROR_REQ_PENDING),
|
||||
OCTEON_REQUEST_TIMEOUT = (DRIVER_ERROR_REQ_TIMEOUT),
|
||||
OCTEON_REQUEST_INTERRUPTED = (DRIVER_ERROR_REQ_EINTR),
|
||||
OCTEON_REQUEST_NO_DEVICE = (0x00000021),
|
||||
OCTEON_REQUEST_NOT_RUNNING,
|
||||
OCTEON_REQUEST_INVALID_IQ,
|
||||
OCTEON_REQUEST_INVALID_BUFCNT,
|
||||
OCTEON_REQUEST_INVALID_RESP_ORDER,
|
||||
OCTEON_REQUEST_NO_MEMORY,
|
||||
OCTEON_REQUEST_INVALID_BUFSIZE,
|
||||
OCTEON_REQUEST_NO_PENDING_ENTRY,
|
||||
OCTEON_REQUEST_NO_IQ_SPACE = (0x7FFFFFFF)
|
||||
|
||||
};
|
||||
|
||||
/** Initialize the response lists. The number of response lists to create is
|
||||
* given by count.
|
||||
* @param octeon_dev - the octeon device structure.
|
||||
*/
|
||||
int octeon_setup_response_list(struct octeon_device *octeon_dev);
|
||||
|
||||
void octeon_delete_response_list(struct octeon_device *octeon_dev);
|
||||
|
||||
/** Check the status of first entry in the ordered list. If the instruction at
|
||||
* that entry finished processing or has timed-out, the entry is cleaned.
|
||||
* @param octeon_dev - the octeon device structure.
|
||||
* @param force_quit - the request is forced to timeout if this is 1
|
||||
* @return 1 if the ordered list is empty, 0 otherwise.
|
||||
*/
|
||||
int lio_process_ordered_list(struct octeon_device *octeon_dev,
|
||||
u32 force_quit);
|
||||
|
||||
#endif
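
The error-code layout described above (major number in the upper 16 bits, minor number in the lower 16) can be made concrete with a small macro; this is illustration only and not part of the patch:

/* Illustration only: compose an error code from its major/minor parts. */
#define LIO_EXAMPLE_MAKE_ERR(major, minor) \
	(((u32)(major) << 16) | ((u32)(minor) & 0xffff))

/* e.g. LIO_EXAMPLE_MAKE_ERR(DRIVER_MAJOR_ERROR_CODE, 0x0003)
 * evaluates to DRIVER_ERROR_REQ_TIMEOUT (0x00000003).
 */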
|