Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Fix uninitialized variable warnings in nfnetlink_queue, a lot of
    people reported this... From Arnd Bergmann.

 2) Don't init mutex twice in i40e driver, from Jesse Brandeburg.

 3) Fix spurious EBUSY in rhashtable, from Herbert Xu.

 4) Missing DMA unmaps in mvpp2 driver, from Marcin Wojtas.

 5) Fix race with work structure access in pppoe driver causing
    corruptions, from Guillaume Nault.

 6) Fix OOPS due to sh_eth_rx() not checking whether netdev_alloc_skb()
    actually succeeded or not, from Sergei Shtylyov.

 7) Don't lose flags when setting IFA_F_OPTIMISTIC in ipv6 code, from
    Bjørn Mork.

 8) VXLAN_HD_RCO defined incorrectly, fix from Jiri Benc.

 9) Fix clock source used for cookies in SCTP, from Marcelo Ricardo
    Leitner.

10) aurora driver needs HAS_DMA dependency, from Geert Uytterhoeven.

11) ndo_fill_metadata_dst op of vxlan has to handle ipv6 tunneling
    properly as well, from Jiri Benc.

12) Handle request sockets properly in xfrm layer, from Eric Dumazet.

13) Double stats update in ipv6 geneve transmit path, fix from Pravin B
    Shelar.

14) sk->sk_policy[] needs RCU protection, and as a result
    xfrm_policy_destroy() needs to free policies using an RCU grace
    period, from Eric Dumazet.

15) SCTP needs to clone ipv6 tx options in order to avoid use after
    free, from Eric Dumazet.

16) Missing kbuild export of ila.h, from Stephen Hemminger.

17) Missing mdiobus_alloc() return value checking in mdio-mux.c, from
    Tobias Klauser.

18) Validate protocol value range in ->create() methods, from Hannes
    Frederic Sowa.

19) Fix early socket demux races that result in illegal dst reuse, from
    Eric Dumazet.

20) Validate socket address length in pptp code, from WANG Cong.

21) skb_reorder_vlan_header() uses incorrect offset and can corrupt
    packets, from Vlad Yasevich.

22) Fix memory leaks in nl80211 registry code, from Ola Olsson.

23) Timeout loop count handling fixes in mISDN, xgbe, qlge, sfc, and
    qlcnic. From Dan Carpenter.

24) msg.msg_iocb needs to be cleared in recvfrom() otherwise, for
    example, AF_ALG will interpret it as an async call. From Tadeusz
    Struk.

25) inetpeer_set_addr_v4 forgets to initialize the 'vif' field, from
    Eric Dumazet.

26) rhashtable enforces the minimum table size not early enough,
    breaking how we calculate the per-cpu lock allocations. From
    Herbert Xu.

27) Fix FCC port lockup in 82xx driver, from Martin Roth.

28) FOU sockets need to be freed using RCU, from Hannes Frederic Sowa.

29) Fix out-of-bounds access in __skb_complete_tx_timestamp() and
    sock_setsockopt() wrt. timestamp handling. From WANG Cong.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (117 commits)
  net: check both type and procotol for tcp sockets
  drivers: net: xgene: fix Tx flow control
  tcp: restore fastopen with no data in SYN packet
  af_unix: Revert 'lock_interruptible' in stream receive code
  fou: clean up socket with kfree_rcu
  82xx: FCC: Fixing a bug causing to FCC port lock-up
  gianfar: Don't enable RX Filer if not supported
  net: fix warnings in 'make htmldocs' by moving macro definition out of field declaration
  rhashtable: Fix walker list corruption
  rhashtable: Enforce minimum size on initial hash table
  inet: tcp: fix inetpeer_set_addr_v4()
  ipv6: automatically enable stable privacy mode if stable_secret set
  net: fix uninitialized variable issue
  bluetooth: Validate socket address length in sco_sock_bind().
  net_sched: make qdisc_tree_decrease_qlen() work for non mq
  ser_gigaset: remove unnecessary kfree() calls from release method
  ser_gigaset: fix deallocation of platform device structure
  ser_gigaset: turn nonsense checks into WARN_ON
  ser_gigaset: fix up NULL checks
  qlcnic: fix a timeout loop
  ...
commit 73796d8bf2
115 changed files with 1049 additions and 675 deletions
@@ -181,17 +181,3 @@ For general information, go to the Intel support website at:
 If an issue is identified with the released source code on the supported
 kernel with a supported adapter, email the specific information related to the
 issue to e1000-devel@lists.sourceforge.net.
-
-
-License
-=======
-
-This software program is released under the terms of a license agreement
-between you ('Licensee') and Intel. Do not use or load this software or any
-associated materials (collectively, the 'Software') until you have carefully
-read the full terms and conditions of the file COPYING located in this software
-package. By loading or using the Software, you agree to the terms of this
-Agreement. If you do not agree with the terms of this Agreement, do not install
-or use the Software.
-
-* Other names and brands may be claimed as the property of others.
@@ -5578,7 +5578,7 @@ R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
 R:	Shannon Nelson <shannon.nelson@intel.com>
 R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
 R:	Don Skidmore <donald.c.skidmore@intel.com>
-R:	Matthew Vick <matthew.vick@intel.com>
+R:	Bruce Allan <bruce.w.allan@intel.com>
 R:	John Ronciak <john.ronciak@intel.com>
 R:	Mitch Williams <mitch.a.williams@intel.com>
 L:	intel-wired-lan@lists.osuosl.org
@@ -8946,6 +8946,13 @@ F:	drivers/rpmsg/
 F:	Documentation/rpmsg.txt
 F:	include/linux/rpmsg.h
 
+RENESAS ETHERNET DRIVERS
+R:	Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
+L:	netdev@vger.kernel.org
+L:	linux-sh@vger.kernel.org
+F:	drivers/net/ethernet/renesas/
+F:	include/linux/sh_eth.h
+
 RESET CONTROLLER FRAMEWORK
 M:	Philipp Zabel <p.zabel@pengutronix.de>
 S:	Maintained
@@ -67,8 +67,7 @@ static int write_modem(struct cardstate *cs)
 	struct sk_buff *skb = bcs->tx_skb;
 	int sent = -EOPNOTSUPP;
 
-	if (!tty || !tty->driver || !skb)
-		return -EINVAL;
+	WARN_ON(!tty || !tty->ops || !skb);
 
 	if (!skb->len) {
 		dev_kfree_skb_any(skb);
@@ -109,8 +108,7 @@ static int send_cb(struct cardstate *cs)
 	unsigned long flags;
 	int sent = 0;
 
-	if (!tty || !tty->driver)
-		return -EFAULT;
+	WARN_ON(!tty || !tty->ops);
 
 	cb = cs->cmdbuf;
 	if (!cb)
@@ -370,19 +368,18 @@ static void gigaset_freecshw(struct cardstate *cs)
 	tasklet_kill(&cs->write_tasklet);
 	if (!cs->hw.ser)
 		return;
-	dev_set_drvdata(&cs->hw.ser->dev.dev, NULL);
 	platform_device_unregister(&cs->hw.ser->dev);
-	kfree(cs->hw.ser);
-	cs->hw.ser = NULL;
 }
 
 static void gigaset_device_release(struct device *dev)
 {
-	struct platform_device *pdev = to_platform_device(dev);
+	struct cardstate *cs = dev_get_drvdata(dev);
 
-	/* adapted from platform_device_release() in drivers/base/platform.c */
-	kfree(dev->platform_data);
-	kfree(pdev->resource);
+	if (!cs)
+		return;
+	dev_set_drvdata(dev, NULL);
+	kfree(cs->hw.ser);
+	cs->hw.ser = NULL;
 }
 
 /*
@@ -432,7 +429,9 @@ static int gigaset_set_modem_ctrl(struct cardstate *cs, unsigned old_state,
 	struct tty_struct *tty = cs->hw.ser->tty;
 	unsigned int set, clear;
 
-	if (!tty || !tty->driver || !tty->ops->tiocmset)
+	WARN_ON(!tty || !tty->ops);
+	/* tiocmset is an optional tty driver method */
+	if (!tty->ops->tiocmset)
 		return -EINVAL;
 	set = new_state & ~old_state;
 	clear = old_state & ~new_state;
@@ -1170,7 +1170,7 @@ mISDNipac_irq(struct ipac_hw *ipac, int maxloop)
 
 	if (ipac->type & IPAC_TYPE_IPACX) {
 		ista = ReadIPAC(ipac, ISACX_ISTA);
-		while (ista && cnt--) {
+		while (ista && --cnt) {
 			pr_debug("%s: ISTA %02x\n", ipac->name, ista);
 			if (ista & IPACX__ICA)
 				ipac_irq(&ipac->hscx[0], ista);
@@ -1182,7 +1182,7 @@ mISDNipac_irq(struct ipac_hw *ipac, int maxloop)
 		}
 	} else if (ipac->type & IPAC_TYPE_IPAC) {
 		ista = ReadIPAC(ipac, IPAC_ISTA);
-		while (ista && cnt--) {
+		while (ista && --cnt) {
 			pr_debug("%s: ISTA %02x\n", ipac->name, ista);
 			if (ista & (IPAC__ICD | IPAC__EXD)) {
 				istad = ReadISAC(isac, ISAC_ISTA);
@@ -1200,7 +1200,7 @@ mISDNipac_irq(struct ipac_hw *ipac, int maxloop)
 			ista = ReadIPAC(ipac, IPAC_ISTA);
 		}
 	} else if (ipac->type & IPAC_TYPE_HSCX) {
-		while (cnt) {
+		while (--cnt) {
 			ista = ReadIPAC(ipac, IPAC_ISTAB + ipac->hscx[1].off);
 			pr_debug("%s: B2 ISTA %02x\n", ipac->name, ista);
 			if (ista)
@@ -1211,7 +1211,6 @@ mISDNipac_irq(struct ipac_hw *ipac, int maxloop)
 			mISDNisac_irq(isac, istad);
 			if (0 == (ista | istad))
 				break;
-			cnt--;
 		}
 	}
 	if (cnt > maxloop) /* only for ISAC/HSCX without PCI IRQ test */
@@ -1849,7 +1849,7 @@ static int xgbe_exit(struct xgbe_prv_data *pdata)
 	usleep_range(10, 15);
 
 	/* Poll Until Poll Condition */
-	while (count-- && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
+	while (--count && XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR))
 		usleep_range(500, 600);
 
 	if (!count)
@@ -1873,7 +1873,7 @@ static int xgbe_flush_tx_queues(struct xgbe_prv_data *pdata)
 	/* Poll Until Poll Condition */
 	for (i = 0; i < pdata->tx_q_count; i++) {
 		count = 2000;
-		while (count-- && XGMAC_MTL_IOREAD_BITS(pdata, i,
+		while (--count && XGMAC_MTL_IOREAD_BITS(pdata, i,
 							MTL_Q_TQOMR, FTQ))
 			usleep_range(500, 600);
 
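The two xgbe hunks above, like the mISDNipac ones before them, are instances of fix 23 from the pull message: with a post-decrement, a polling loop that exhausts its budget exits with the counter at -1, so a later "if (!count)" timeout check never fires; pre-decrementing makes the counter end at exactly 0. A minimal userspace sketch of the pattern, with a hypothetical poll_busy() standing in for the hardware read:

	#include <stdio.h>

	/* Hypothetical stand-in for a hardware busy poll such as
	 * XGMAC_IOREAD_BITS(pdata, DMA_MR, SWR); stays busy forever here
	 * so the timeout path is exercised. */
	static int poll_busy(void)
	{
		return 1;
	}

	int main(void)
	{
		int count = 2000;

		/* Buggy form: "while (count-- && poll_busy())" exits with
		 * count == -1 on timeout, so the check below never fires.
		 * The pre-decrement form exits with count == 0. */
		while (--count && poll_busy())
			;	/* the drivers sleep here, e.g. usleep_range(500, 600) */

		if (!count)
			printf("timed out\n");
		return 0;
	}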
@@ -289,6 +289,7 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 				    struct sk_buff *skb)
 {
 	struct device *dev = ndev_to_dev(tx_ring->ndev);
+	struct xgene_enet_pdata *pdata = netdev_priv(tx_ring->ndev);
 	struct xgene_enet_raw_desc *raw_desc;
 	__le64 *exp_desc = NULL, *exp_bufs = NULL;
 	dma_addr_t dma_addr, pbuf_addr, *frag_dma_addr;
@@ -419,6 +420,7 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 	raw_desc->m0 = cpu_to_le64(SET_VAL(LL, ll) | SET_VAL(NV, nv) |
 				   SET_VAL(USERINFO, tx_ring->tail));
 	tx_ring->cp_ring->cp_skb[tx_ring->tail] = skb;
+	pdata->tx_level += count;
 	tx_ring->tail = tail;
 
 	return count;
@@ -429,14 +431,13 @@ static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
 {
 	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
 	struct xgene_enet_desc_ring *tx_ring = pdata->tx_ring;
-	struct xgene_enet_desc_ring *cp_ring = tx_ring->cp_ring;
-	u32 tx_level, cq_level;
+	u32 tx_level = pdata->tx_level;
 	int count;
 
-	tx_level = pdata->ring_ops->len(tx_ring);
-	cq_level = pdata->ring_ops->len(cp_ring);
-	if (unlikely(tx_level > pdata->tx_qcnt_hi ||
-		     cq_level > pdata->cp_qcnt_hi)) {
+	if (tx_level < pdata->txc_level)
+		tx_level += ((typeof(pdata->tx_level))~0U);
+
+	if ((tx_level - pdata->txc_level) > pdata->tx_qcnt_hi) {
 		netif_stop_queue(ndev);
 		return NETDEV_TX_BUSY;
 	}
@@ -539,10 +540,13 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 	struct xgene_enet_raw_desc *raw_desc, *exp_desc;
 	u16 head = ring->head;
 	u16 slots = ring->slots - 1;
-	int ret, count = 0, processed = 0;
+	int ret, desc_count, count = 0, processed = 0;
+	bool is_completion;
 
 	do {
 		raw_desc = &ring->raw_desc[head];
+		desc_count = 0;
+		is_completion = false;
 		exp_desc = NULL;
 		if (unlikely(xgene_enet_is_desc_slot_empty(raw_desc)))
 			break;
@@ -559,18 +563,24 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 			}
 			dma_rmb();
 			count++;
+			desc_count++;
 		}
-		if (is_rx_desc(raw_desc))
+		if (is_rx_desc(raw_desc)) {
 			ret = xgene_enet_rx_frame(ring, raw_desc);
-		else
+		} else {
 			ret = xgene_enet_tx_completion(ring, raw_desc);
+			is_completion = true;
+		}
 		xgene_enet_mark_desc_slot_empty(raw_desc);
 		if (exp_desc)
 			xgene_enet_mark_desc_slot_empty(exp_desc);
 
 		head = (head + 1) & slots;
 		count++;
+		desc_count++;
 		processed++;
+		if (is_completion)
+			pdata->txc_level += desc_count;
 
 		if (ret)
 			break;
@@ -580,10 +590,8 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 		pdata->ring_ops->wr_cmd(ring, -count);
 		ring->head = head;
 
-		if (netif_queue_stopped(ring->ndev)) {
-			if (pdata->ring_ops->len(ring) < pdata->cp_qcnt_low)
-				netif_wake_queue(ring->ndev);
-		}
+		if (netif_queue_stopped(ring->ndev))
+			netif_start_queue(ring->ndev);
 	}
 
 	return processed;
@@ -1033,9 +1041,7 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
 	pdata->tx_ring->cp_ring = cp_ring;
 	pdata->tx_ring->dst_ring_num = xgene_enet_dst_ring_num(cp_ring);
 
-	pdata->tx_qcnt_hi = pdata->tx_ring->slots / 2;
-	pdata->cp_qcnt_hi = pdata->rx_ring->slots / 2;
-	pdata->cp_qcnt_low = pdata->cp_qcnt_hi / 2;
+	pdata->tx_qcnt_hi = pdata->tx_ring->slots - 128;
 
 	return 0;
 
@@ -155,11 +155,11 @@ struct xgene_enet_pdata {
 	enum xgene_enet_id enet_id;
 	struct xgene_enet_desc_ring *tx_ring;
 	struct xgene_enet_desc_ring *rx_ring;
+	u16 tx_level;
+	u16 txc_level;
 	char *dev_name;
 	u32 rx_buff_cnt;
 	u32 tx_qcnt_hi;
-	u32 cp_qcnt_hi;
-	u32 cp_qcnt_low;
 	u32 rx_irq;
 	u32 txc_irq;
 	u8 cq_cnt;
@@ -1016,13 +1016,12 @@ static int atl1c_setup_ring_resources(struct atl1c_adapter *adapter)
 		sizeof(struct atl1c_recv_ret_status) * rx_desc_count +
 		8 * 4;
 
-	ring_header->desc = pci_alloc_consistent(pdev, ring_header->size,
-				&ring_header->dma);
+	ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,
+						&ring_header->dma, GFP_KERNEL);
 	if (unlikely(!ring_header->desc)) {
-		dev_err(&pdev->dev, "pci_alloc_consistend failed\n");
+		dev_err(&pdev->dev, "could not get memory for DMA buffer\n");
 		goto err_nomem;
 	}
-	memset(ring_header->desc, 0, ring_header->size);
 	/* init TPD ring */
 
 	tpd_ring[0].dma = roundup(ring_header->dma, 8);
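The atl1c hunk above swaps the legacy pci_alloc_consistent()-plus-memset() pair for dma_zalloc_coherent(), which returns zeroed memory and takes an explicit GFP context. A hedged sketch of the resulting allocation pattern (the helper name is illustrative, not from the driver):

	#include <linux/dma-mapping.h>

	/* Illustrative helper: allocate a zeroed, coherent ring header.
	 * dma_zalloc_coherent() zeroes the buffer itself, so the old
	 * separate memset(desc, 0, size) call becomes unnecessary. */
	static void *ring_header_alloc(struct device *dev, size_t size,
				       dma_addr_t *dma)
	{
		return dma_zalloc_coherent(dev, size, dma, GFP_KERNEL);
	}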
@@ -13,6 +13,7 @@ if NET_VENDOR_AURORA
 
 config AURORA_NB8800
 	tristate "Aurora AU-NB8800 support"
+	depends on HAS_DMA
 	select PHYLIB
 	help
 	 Support for the AU-NB8800 gigabit Ethernet controller.
@@ -2693,17 +2693,16 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp)
 	req.ver_upd = DRV_VER_UPD;
 
 	if (BNXT_PF(bp)) {
-		unsigned long vf_req_snif_bmap[4];
+		DECLARE_BITMAP(vf_req_snif_bmap, 256);
 		u32 *data = (u32 *)vf_req_snif_bmap;
 
-		memset(vf_req_snif_bmap, 0, 32);
+		memset(vf_req_snif_bmap, 0, sizeof(vf_req_snif_bmap));
 		for (i = 0; i < ARRAY_SIZE(bnxt_vf_req_snif); i++)
 			__set_bit(bnxt_vf_req_snif[i], vf_req_snif_bmap);
 
-		for (i = 0; i < 8; i++) {
-			req.vf_req_fwd[i] = cpu_to_le32(*data);
-			data++;
-		}
+		for (i = 0; i < 8; i++)
+			req.vf_req_fwd[i] = cpu_to_le32(data[i]);
+
 		req.enables |=
 			cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_VF_REQ_FWD);
 	}
@@ -4603,7 +4602,7 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 		bp->nge_port_cnt = 1;
 	}
 
-	bp->state = BNXT_STATE_OPEN;
+	set_bit(BNXT_STATE_OPEN, &bp->state);
 	bnxt_enable_int(bp);
 	/* Enable TX queues */
 	bnxt_tx_enable(bp);
@@ -4679,8 +4678,10 @@ int bnxt_close_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 	/* Change device state to avoid TX queue wake up's */
 	bnxt_tx_disable(bp);
 
-	bp->state = BNXT_STATE_CLOSED;
-	cancel_work_sync(&bp->sp_task);
+	clear_bit(BNXT_STATE_OPEN, &bp->state);
+	smp_mb__after_atomic();
+	while (test_bit(BNXT_STATE_IN_SP_TASK, &bp->state))
+		msleep(20);
 
 	/* Flush rings before disabling interrupts */
 	bnxt_shutdown_nic(bp, irq_re_init);
@@ -5030,8 +5031,10 @@ static void bnxt_dbg_dump_states(struct bnxt *bp)
 static void bnxt_reset_task(struct bnxt *bp)
 {
 	bnxt_dbg_dump_states(bp);
-	if (netif_running(bp->dev))
-		bnxt_tx_disable(bp); /* prevent tx timout again */
+	if (netif_running(bp->dev)) {
+		bnxt_close_nic(bp, false, false);
+		bnxt_open_nic(bp, false, false);
+	}
 }
 
 static void bnxt_tx_timeout(struct net_device *dev)
@@ -5081,8 +5084,12 @@ static void bnxt_sp_task(struct work_struct *work)
 	struct bnxt *bp = container_of(work, struct bnxt, sp_task);
 	int rc;
 
-	if (bp->state != BNXT_STATE_OPEN)
+	set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+	smp_mb__after_atomic();
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
+		clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
 		return;
+	}
 
 	if (test_and_clear_bit(BNXT_RX_MASK_SP_EVENT, &bp->sp_event))
 		bnxt_cfg_rx_mode(bp);
@@ -5106,8 +5113,19 @@ static void bnxt_sp_task(struct work_struct *work)
 		bnxt_hwrm_tunnel_dst_port_free(
 			bp, TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN);
 	}
-	if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event))
+	if (test_and_clear_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event)) {
+		/* bnxt_reset_task() calls bnxt_close_nic() which waits
+		 * for BNXT_STATE_IN_SP_TASK to clear.
+		 */
+		clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+		rtnl_lock();
 		bnxt_reset_task(bp);
+		set_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
+		rtnl_unlock();
+	}
+
+	smp_mb__before_atomic();
+	clear_bit(BNXT_STATE_IN_SP_TASK, &bp->state);
 }
 
 static int bnxt_init_board(struct pci_dev *pdev, struct net_device *dev)
@@ -5186,7 +5204,7 @@ static int bnxt_init_board(struct pci_dev *pdev, struct net_device *dev)
 	bp->timer.function = bnxt_timer;
 	bp->current_interval = BNXT_TIMER_INTERVAL;
 
-	bp->state = BNXT_STATE_CLOSED;
+	clear_bit(BNXT_STATE_OPEN, &bp->state);
 
 	return 0;
 
@@ -925,9 +925,9 @@ struct bnxt {
 
 	struct timer_list	timer;
 
-	int			state;
-#define BNXT_STATE_CLOSED	0
-#define BNXT_STATE_OPEN		1
+	unsigned long		state;
+#define BNXT_STATE_OPEN		0
+#define BNXT_STATE_IN_SP_TASK	1
 
 	struct bnxt_irq	*irq_tbl;
 	u8		mac_addr[ETH_ALEN];
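The bnxt hunks above convert bp->state from a plain int into an unsigned long bit field so OPEN and IN_SP_TASK can be tracked independently with atomic bitops. A sketch of the handshake they set up, using the bit names from the diff (the helper functions are illustrative):

	#include <linux/bitops.h>
	#include <linux/delay.h>

	#define BNXT_STATE_OPEN		0
	#define BNXT_STATE_IN_SP_TASK	1

	/* Worker side: advertise that sp_task is running, then re-check OPEN. */
	static bool sp_task_enter(unsigned long *state)
	{
		set_bit(BNXT_STATE_IN_SP_TASK, state);
		smp_mb__after_atomic();		/* pairs with the barrier below */
		if (!test_bit(BNXT_STATE_OPEN, state)) {
			clear_bit(BNXT_STATE_IN_SP_TASK, state);
			return false;
		}
		return true;
	}

	/* Close side: clear OPEN first, then wait out any running sp_task. */
	static void close_wait_for_sp_task(unsigned long *state)
	{
		clear_bit(BNXT_STATE_OPEN, state);
		smp_mb__after_atomic();
		while (test_bit(BNXT_STATE_IN_SP_TASK, state))
			msleep(20);
	}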
@@ -21,7 +21,7 @@
 #ifdef CONFIG_BNXT_SRIOV
 static int bnxt_vf_ndo_prep(struct bnxt *bp, int vf_id)
 {
-	if (bp->state != BNXT_STATE_OPEN) {
+	if (!test_bit(BNXT_STATE_OPEN, &bp->state)) {
 		netdev_err(bp->dev, "vf ndo called though PF is down\n");
 		return -EINVAL;
 	}
@@ -37,7 +37,6 @@ struct nicpf {
 #define	NIC_GET_BGX_FROM_VF_LMAC_MAP(map)	((map >> 4) & 0xF)
 #define	NIC_GET_LMAC_FROM_VF_LMAC_MAP(map)	(map & 0xF)
 	u8	vf_lmac_map[MAX_LMAC];
-	u8	lmac_cnt;
 	struct delayed_work    dwork;
 	struct workqueue_struct *check_link;
 	u8	link[MAX_LMAC];
@@ -280,7 +279,6 @@ static void nic_set_lmac_vf_mapping(struct nicpf *nic)
 	u64 lmac_credit;
 
 	nic->num_vf_en = 0;
-	nic->lmac_cnt = 0;
 
 	for (bgx = 0; bgx < NIC_MAX_BGX; bgx++) {
 		if (!(bgx_map & (1 << bgx)))
@@ -290,7 +288,6 @@ static void nic_set_lmac_vf_mapping(struct nicpf *nic)
 			nic->vf_lmac_map[next_bgx_lmac++] =
 						NIC_SET_VF_LMAC_MAP(bgx, lmac);
 		nic->num_vf_en += lmac_cnt;
-		nic->lmac_cnt += lmac_cnt;
 
 		/* Program LMAC credits */
 		lmac_credit = (1ull << 1); /* channel credit enable */
@@ -618,6 +615,21 @@ static int nic_config_loopback(struct nicpf *nic, struct set_loopback *lbk)
 	return 0;
 }
 
+static void nic_enable_vf(struct nicpf *nic, int vf, bool enable)
+{
+	int bgx, lmac;
+
+	nic->vf_enabled[vf] = enable;
+
+	if (vf >= nic->num_vf_en)
+		return;
+
+	bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+	lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
+
+	bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, enable);
+}
+
 /* Interrupt handler to handle mailbox messages from VFs */
 static void nic_handle_mbx_intr(struct nicpf *nic, int vf)
 {
@@ -717,29 +729,14 @@
 		break;
 	case NIC_MBOX_MSG_CFG_DONE:
 		/* Last message of VF config msg sequence */
-		nic->vf_enabled[vf] = true;
-		if (vf >= nic->lmac_cnt)
-			goto unlock;
-
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, true);
+		nic_enable_vf(nic, vf, true);
 		goto unlock;
 	case NIC_MBOX_MSG_SHUTDOWN:
 		/* First msg in VF teardown sequence */
-		nic->vf_enabled[vf] = false;
 		if (vf >= nic->num_vf_en)
 			nic->sqs_used[vf - nic->num_vf_en] = false;
 		nic->pqs_vf[vf] = 0;
+		nic_enable_vf(nic, vf, false);
-		if (vf >= nic->lmac_cnt)
-			break;
-
-		bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-		lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]);
-
-		bgx_lmac_rx_tx_enable(nic->node, bgx, lmac, false);
 		break;
 	case NIC_MBOX_MSG_ALLOC_SQS:
 		nic_alloc_sqs(nic, &mbx.sqs_alloc);
@@ -958,7 +955,7 @@ static void nic_poll_for_link(struct work_struct *work)
 
 	mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE;
 
-	for (vf = 0; vf < nic->lmac_cnt; vf++) {
+	for (vf = 0; vf < nic->num_vf_en; vf++) {
 		/* Poll only if VF is UP */
 		if (!nic->vf_enabled[vf])
 			continue;
@@ -48,21 +48,15 @@ static void nps_enet_read_rx_fifo(struct net_device *ndev,
 		*reg = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
 	else { /* !dst_is_aligned */
 		for (i = 0; i < len; i++, reg++) {
-			u32 buf =
-				nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-
-			/* to accommodate word-unaligned address of "reg"
-			 * we have to do memcpy_toio() instead of simple "=".
-			 */
-			memcpy_toio((void __iomem *)reg, &buf, sizeof(buf));
+			u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
+			put_unaligned(buf, reg);
 		}
 	}
 
 	/* copy last bytes (if any) */
 	if (last) {
 		u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
-		memcpy_toio((void __iomem *)reg, &buf, last);
+
+		memcpy((u8*)reg, &buf, last);
 	}
 }
 
@@ -367,7 +361,7 @@ static void nps_enet_send_frame(struct net_device *ndev,
 	struct nps_enet_tx_ctl tx_ctrl;
 	short length = skb->len;
 	u32 i, len = DIV_ROUND_UP(length, sizeof(u32));
-	u32 *src = (u32 *)virt_to_phys(skb->data);
+	u32 *src = (void *)skb->data;
 	bool src_is_aligned = IS_ALIGNED((unsigned long)src, sizeof(u32));
 
 	tx_ctrl.value = 0;
@@ -375,17 +369,11 @@ static void nps_enet_send_frame(struct net_device *ndev,
 	if (src_is_aligned)
 		for (i = 0; i < len; i++, src++)
 			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, *src);
-	else { /* !src_is_aligned */
-		for (i = 0; i < len; i++, src++) {
-			u32 buf;
-
-			/* to accommodate word-unaligned address of "src"
-			 * we have to do memcpy_fromio() instead of simple "="
-			 */
-			memcpy_fromio(&buf, (void __iomem *)src, sizeof(buf));
-			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF, buf);
-		}
-	}
+	else /* !src_is_aligned */
+		for (i = 0; i < len; i++, src++)
+			nps_enet_reg_set(priv, NPS_ENET_REG_TX_BUF,
+					 get_unaligned(src));
+
 	/* Write the length of the Frame */
 	tx_ctrl.nt = length;
 
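The nps_enet hunks above drop memcpy_toio()/memcpy_fromio(), which are for __iomem regions, in favor of put_unaligned()/get_unaligned(): skb->data is ordinary kernel memory that may merely be word-unaligned. A sketch of the accessor pair (function names are illustrative):

	#include <asm/unaligned.h>
	#include <linux/types.h>

	/* Read a 32-bit word from a possibly unaligned kernel pointer. */
	static u32 read_word(const u32 *src)
	{
		return get_unaligned(src);
	}

	/* Write a 32-bit word to a possibly unaligned kernel pointer. */
	static void write_word(u32 *dst, u32 val)
	{
		put_unaligned(val, dst);
	}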
@@ -552,7 +552,7 @@ static void tx_restart(struct net_device *dev)
 	cbd_t __iomem *prev_bd;
 	cbd_t __iomem *last_tx_bd;
 
-	last_tx_bd = fep->tx_bd_base + (fpi->tx_ring * sizeof(cbd_t));
+	last_tx_bd = fep->tx_bd_base + ((fpi->tx_ring - 1) * sizeof(cbd_t));
 
 	/* get the current bd held in TBPTR and scan back from this point */
 	recheck_bd = curr_tbptr = (cbd_t __iomem *)
@@ -464,7 +464,7 @@ static int fsl_pq_mdio_probe(struct platform_device *pdev)
 	 * address). Print error message but continue anyway.
	 */
 	if ((void *)tbipa > priv->map + resource_size(&res) - 4)
-		dev_err(&pdev->dev, "invalid register map (should be at least 0x%04x to contain TBI address)\n",
+		dev_err(&pdev->dev, "invalid register map (should be at least 0x%04zx to contain TBI address)\n",
 			((void *)tbipa - priv->map) + 4);
 
 	iowrite32be(be32_to_cpup(prop), tbipa);
@@ -894,7 +894,8 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
 		     FSL_GIANFAR_DEV_HAS_VLAN |
 		     FSL_GIANFAR_DEV_HAS_MAGIC_PACKET |
 		     FSL_GIANFAR_DEV_HAS_EXTENDED_HASH |
-		     FSL_GIANFAR_DEV_HAS_TIMER;
+		     FSL_GIANFAR_DEV_HAS_TIMER |
+		     FSL_GIANFAR_DEV_HAS_RX_FILER;
 
 	err = of_property_read_string(np, "phy-connection-type", &ctype);
 
@@ -1396,8 +1397,9 @@ static int gfar_probe(struct platform_device *ofdev)
 		priv->rx_queue[i]->rxic = DEFAULT_RXIC;
 	}
 
-	/* always enable rx filer */
-	priv->rx_filer_enable = 1;
+	/* Always enable rx filer if available */
+	priv->rx_filer_enable =
+	    (priv->device_flags & FSL_GIANFAR_DEV_HAS_RX_FILER) ? 1 : 0;
 	/* Enable most messages by default */
 	priv->msg_enable = (NETIF_MSG_IFUP << 1 ) - 1;
 	/* use pritority h/w tx queue scheduling for single queue devices */
@@ -923,6 +923,7 @@ struct gfar {
 #define FSL_GIANFAR_DEV_HAS_BUF_STASHING	0x00000400
 #define FSL_GIANFAR_DEV_HAS_TIMER		0x00000800
 #define FSL_GIANFAR_DEV_HAS_WAKE_ON_FILER	0x00001000
+#define FSL_GIANFAR_DEV_HAS_RX_FILER		0x00002000
 
 #if (MAXGROUPS == 2)
 #define DEFAULT_MAPPING 0xAA
@@ -1259,12 +1259,8 @@ int hns_dsaf_set_mac_uc_entry(
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr) ||
 	    MAC_IS_BROADCAST(mac_entry->addr) ||
 	    MAC_IS_MULTICAST(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"set_uc %s Mac %02x:%02x:%02x:%02x:%02x:%02x err!\n",
-			dsaf_dev->ae_dev.name, mac_entry->addr[0],
-			mac_entry->addr[1], mac_entry->addr[2],
-			mac_entry->addr[3], mac_entry->addr[4],
-			mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "set_uc %s Mac %pM err!\n",
+			dsaf_dev->ae_dev.name, mac_entry->addr);
 		return -EINVAL;
 	}
 
@@ -1331,12 +1327,8 @@ int hns_dsaf_set_mac_mc_entry(
 
 	/* mac addr check */
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"set uc %s Mac %02x:%02x:%02x:%02x:%02x:%02x err!\n",
-			dsaf_dev->ae_dev.name, mac_entry->addr[0],
-			mac_entry->addr[1], mac_entry->addr[2],
-			mac_entry->addr[3],
-			mac_entry->addr[4], mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "set uc %s Mac %pM err!\n",
+			dsaf_dev->ae_dev.name, mac_entry->addr);
 		return -EINVAL;
 	}
 
@@ -1410,11 +1402,8 @@ int hns_dsaf_add_mac_mc_port(struct dsaf_device *dsaf_dev,
 
 	/*chechk mac addr */
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"set_entry failed,addr %02x:%02x:%02x:%02x:%02x:%02x!\n",
-			mac_entry->addr[0], mac_entry->addr[1],
-			mac_entry->addr[2], mac_entry->addr[3],
-			mac_entry->addr[4], mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "set_entry failed,addr %pM!\n",
+			mac_entry->addr);
 		return -EINVAL;
 	}
 
@@ -1497,9 +1486,8 @@ int hns_dsaf_del_mac_entry(struct dsaf_device *dsaf_dev, u16 vlan_id,
 
 	/*check mac addr */
 	if (MAC_IS_ALL_ZEROS(addr) || MAC_IS_BROADCAST(addr)) {
-		dev_err(dsaf_dev->dev,
-			"del_entry failed,addr %02x:%02x:%02x:%02x:%02x:%02x!\n",
-			addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
+		dev_err(dsaf_dev->dev, "del_entry failed,addr %pM!\n",
+			addr);
 		return -EINVAL;
 	}
 
@@ -1563,11 +1551,8 @@ int hns_dsaf_del_mac_mc_port(struct dsaf_device *dsaf_dev,
 
 	/*check mac addr */
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"del_port failed, addr %02x:%02x:%02x:%02x:%02x:%02x!\n",
-			mac_entry->addr[0], mac_entry->addr[1],
-			mac_entry->addr[2], mac_entry->addr[3],
-			mac_entry->addr[4], mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "del_port failed, addr %pM!\n",
+			mac_entry->addr);
 		return -EINVAL;
 	}
 
@@ -1644,11 +1629,8 @@ int hns_dsaf_get_mac_uc_entry(struct dsaf_device *dsaf_dev,
 	/* check macaddr */
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr) ||
 	    MAC_IS_BROADCAST(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"get_entry failed,addr %02x:%02x:%02x:%02x:%02x:%02x\n",
-			mac_entry->addr[0], mac_entry->addr[1],
-			mac_entry->addr[2], mac_entry->addr[3],
-			mac_entry->addr[4], mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "get_entry failed,addr %pM\n",
+			mac_entry->addr);
 		return -EINVAL;
 	}
 
@@ -1695,11 +1677,8 @@ int hns_dsaf_get_mac_mc_entry(struct dsaf_device *dsaf_dev,
 	/*check mac addr */
 	if (MAC_IS_ALL_ZEROS(mac_entry->addr) ||
 	    MAC_IS_BROADCAST(mac_entry->addr)) {
-		dev_err(dsaf_dev->dev,
-			"get_entry failed,addr %02x:%02x:%02x:%02x:%02x:%02x\n",
-			mac_entry->addr[0], mac_entry->addr[1],
-			mac_entry->addr[2], mac_entry->addr[3],
-			mac_entry->addr[4], mac_entry->addr[5]);
+		dev_err(dsaf_dev->dev, "get_entry failed,addr %pM\n",
			mac_entry->addr);
 		return -EINVAL;
 	}
 
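All seven hns hunks above make the same substitution: the kernel's printk extension %pM formats a 6-byte MAC address as xx:xx:xx:xx:xx:xx, so a single pointer argument replaces six %02x conversions. For example:

	#include <linux/printk.h>
	#include <linux/types.h>

	static void report_bad_mac(const u8 *addr)
	{
		/* prints e.g. "set_uc failed, addr 00:11:22:33:44:55" */
		pr_err("set_uc failed, addr %pM!\n", addr);
	}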
@@ -898,7 +898,7 @@
 #define XGMAC_PAUSE_CTL_RSP_MODE_B	2
 #define XGMAC_PAUSE_CTL_TX_XOFF_B	3
 
-static inline void dsaf_write_reg(void *base, u32 reg, u32 value)
+static inline void dsaf_write_reg(void __iomem *base, u32 reg, u32 value)
 {
 	u8 __iomem *reg_addr = ACCESS_ONCE(base);
 
@@ -908,7 +908,7 @@ static inline void dsaf_write_reg(void *base, u32 reg, u32 value)
 #define dsaf_write_dev(a, reg, value) \
 	dsaf_write_reg((a)->io_base, (reg), (value))
 
-static inline u32 dsaf_read_reg(u8 *base, u32 reg)
+static inline u32 dsaf_read_reg(u8 __iomem *base, u32 reg)
 {
 	u8 __iomem *reg_addr = ACCESS_ONCE(base);
 
@@ -927,8 +927,8 @@ static inline u32 dsaf_read_reg(u8 *base, u32 reg)
 #define dsaf_set_bit(origin, shift, val) \
 	dsaf_set_field((origin), (1ull << (shift)), (shift), (val))
 
-static inline void dsaf_set_reg_field(void *base, u32 reg, u32 mask, u32 shift,
-				      u32 val)
+static inline void dsaf_set_reg_field(void __iomem *base, u32 reg, u32 mask,
+				      u32 shift, u32 val)
 {
 	u32 origin = dsaf_read_reg(base, reg);
 
@@ -947,7 +947,8 @@ static inline void dsaf_set_reg_field(void *base, u32 reg, u32 mask, u32 shift,
 #define dsaf_get_bit(origin, shift) \
 	dsaf_get_field((origin), (1ull << (shift)), (shift))
 
-static inline u32 dsaf_get_reg_field(void *base, u32 reg, u32 mask, u32 shift)
+static inline u32 dsaf_get_reg_field(void __iomem *base, u32 reg, u32 mask,
+				     u32 shift)
 {
 	u32 origin;
 
@@ -567,10 +567,6 @@ i40e_status i40e_init_adminq(struct i40e_hw *hw)
 		goto init_adminq_exit;
 	}
 
-	/* initialize locks */
-	mutex_init(&hw->aq.asq_mutex);
-	mutex_init(&hw->aq.arq_mutex);
-
 	/* Set up register offsets */
 	i40e_adminq_init_regs(hw);
 
@@ -664,8 +660,6 @@ i40e_status i40e_shutdown_adminq(struct i40e_hw *hw)
 	i40e_shutdown_asq(hw);
 	i40e_shutdown_arq(hw);
 
-	/* destroy the locks */
-
 	if (hw->nvm_buff.va)
 		i40e_free_virt_mem(hw, &hw->nvm_buff);
 
@@ -10295,6 +10295,12 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* set up a default setting for link flow control */
 	pf->hw.fc.requested_mode = I40E_FC_NONE;
 
+	/* set up the locks for the AQ, do this only once in probe
+	 * and destroy them only once in remove
+	 */
+	mutex_init(&hw->aq.asq_mutex);
+	mutex_init(&hw->aq.arq_mutex);
+
 	err = i40e_init_adminq(hw);
 
 	/* provide nvm, fw, api versions */
@@ -10697,7 +10703,6 @@ static void i40e_remove(struct pci_dev *pdev)
 	set_bit(__I40E_DOWN, &pf->state);
 	del_timer_sync(&pf->service_timer);
 	cancel_work_sync(&pf->service_task);
-	i40e_fdir_teardown(pf);
 
 	if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
 		i40e_free_vfs(pf);
@@ -10740,6 +10745,10 @@ static void i40e_remove(struct pci_dev *pdev)
 			 "Failed to destroy the Admin Queue resources: %d\n",
 			 ret_code);
 
+	/* destroy the locks only once, here */
+	mutex_destroy(&hw->aq.arq_mutex);
+	mutex_destroy(&hw->aq.asq_mutex);
+
 	/* Clear all dynamic memory lists of rings, q_vectors, and VSIs */
 	i40e_clear_interrupt_scheme(pf);
 	for (i = 0; i < pf->num_alloc_vsi; i++) {
@@ -551,10 +551,6 @@ i40e_status i40evf_init_adminq(struct i40e_hw *hw)
 		goto init_adminq_exit;
 	}
 
-	/* initialize locks */
-	mutex_init(&hw->aq.asq_mutex);
-	mutex_init(&hw->aq.arq_mutex);
-
 	/* Set up register offsets */
 	i40e_adminq_init_regs(hw);
 
@@ -596,8 +592,6 @@ i40e_status i40evf_shutdown_adminq(struct i40e_hw *hw)
 	i40e_shutdown_asq(hw);
 	i40e_shutdown_arq(hw);
 
-	/* destroy the locks */
-
 	if (hw->nvm_buff.va)
 		i40e_free_virt_mem(hw, &hw->nvm_buff);
 
@@ -2476,6 +2476,12 @@ static int i40evf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	hw->bus.device = PCI_SLOT(pdev->devfn);
 	hw->bus.func = PCI_FUNC(pdev->devfn);
 
+	/* set up the locks for the AQ, do this only once in probe
+	 * and destroy them only once in remove
+	 */
+	mutex_init(&hw->aq.asq_mutex);
+	mutex_init(&hw->aq.arq_mutex);
+
 	INIT_LIST_HEAD(&adapter->mac_filter_list);
 	INIT_LIST_HEAD(&adapter->vlan_filter_list);
 
@@ -2629,6 +2635,10 @@ static void i40evf_remove(struct pci_dev *pdev)
 	if (hw->aq.asq.count)
 		i40evf_shutdown_adminq(hw);
 
+	/* destroy the locks only once, here */
+	mutex_destroy(&hw->aq.arq_mutex);
+	mutex_destroy(&hw->aq.asq_mutex);
+
 	iounmap(hw->hw_addr);
 	pci_release_regions(pdev);
 
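The i40e and i40evf hunks above all serve fix 2 from the pull message: mutex_init() must not run on a lock that may be live, and init_adminq() can be called again on reset, so the AQ mutexes are now initialized exactly once in probe and destroyed exactly once in remove. A sketch of the lifetime rule (helper names are illustrative):

	#include <linux/mutex.h>

	/* probe(): called once per device lifetime. */
	static void aq_locks_init(struct mutex *asq, struct mutex *arq)
	{
		mutex_init(asq);
		mutex_init(arq);
	}

	/* remove(): the only place the locks are destroyed. */
	static void aq_locks_destroy(struct mutex *asq, struct mutex *arq)
	{
		mutex_destroy(arq);
		mutex_destroy(asq);
	}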
@@ -7920,6 +7920,9 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
	 */
 	if (netif_running(dev))
 		ixgbe_close(dev);
+	else
+		ixgbe_reset(adapter);
+
 	ixgbe_clear_interrupt_scheme(adapter);
 
 #ifdef CONFIG_IXGBE_DCB
@@ -3413,16 +3413,23 @@ static void mvpp2_bm_pool_bufsize_set(struct mvpp2 *priv,
 }
 
 /* Free all buffers from the pool */
-static void mvpp2_bm_bufs_free(struct mvpp2 *priv, struct mvpp2_bm_pool *bm_pool)
+static void mvpp2_bm_bufs_free(struct device *dev, struct mvpp2 *priv,
+			       struct mvpp2_bm_pool *bm_pool)
 {
 	int i;
 
 	for (i = 0; i < bm_pool->buf_num; i++) {
+		dma_addr_t buf_phys_addr;
 		u32 vaddr;
 
 		/* Get buffer virtual address (indirect access) */
-		mvpp2_read(priv, MVPP2_BM_PHY_ALLOC_REG(bm_pool->id));
+		buf_phys_addr = mvpp2_read(priv,
+					   MVPP2_BM_PHY_ALLOC_REG(bm_pool->id));
 		vaddr = mvpp2_read(priv, MVPP2_BM_VIRT_ALLOC_REG);
 
+		dma_unmap_single(dev, buf_phys_addr,
+				 bm_pool->buf_size, DMA_FROM_DEVICE);
+
 		if (!vaddr)
 			break;
 		dev_kfree_skb_any((struct sk_buff *)vaddr);
@@ -3439,7 +3446,7 @@ static int mvpp2_bm_pool_destroy(struct platform_device *pdev,
 {
 	u32 val;
 
-	mvpp2_bm_bufs_free(priv, bm_pool);
+	mvpp2_bm_bufs_free(&pdev->dev, priv, bm_pool);
 	if (bm_pool->buf_num) {
 		WARN(1, "cannot free all buffers in pool %d\n", bm_pool->id);
 		return 0;
@@ -3692,7 +3699,8 @@ mvpp2_bm_pool_use(struct mvpp2_port *port, int pool, enum mvpp2_bm_type type,
 					MVPP2_BM_LONG_BUF_NUM :
 					MVPP2_BM_SHORT_BUF_NUM;
 		else
-			mvpp2_bm_bufs_free(port->priv, new_pool);
+			mvpp2_bm_bufs_free(port->dev->dev.parent,
+					   port->priv, new_pool);
 
 		new_pool->pkt_size = pkt_size;
 
@@ -3756,7 +3764,7 @@ static int mvpp2_bm_update_mtu(struct net_device *dev, int mtu)
 	int pkt_size = MVPP2_RX_PKT_SIZE(mtu);
 
 	/* Update BM pool with new buffer size */
-	mvpp2_bm_bufs_free(port->priv, port_pool);
+	mvpp2_bm_bufs_free(dev->dev.parent, port->priv, port_pool);
 	if (port_pool->buf_num) {
 		WARN(1, "cannot free all buffers in pool %d\n", port_pool->id);
 		return -EIO;
@@ -4401,11 +4409,10 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
 
 		mvpp2_txq_inc_get(txq_pcpu);
 
-		if (!skb)
-			continue;
-
 		dma_unmap_single(port->dev->dev.parent, buf_phys_addr,
 				 skb_headlen(skb), DMA_TO_DEVICE);
+		if (!skb)
+			continue;
 		dev_kfree_skb_any(skb);
 	}
 }
@@ -5092,7 +5099,8 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
 		     struct mvpp2_rx_queue *rxq)
 {
 	struct net_device *dev = port->dev;
-	int rx_received, rx_filled, i;
+	int rx_received;
+	int rx_done = 0;
 	u32 rcvd_pkts = 0;
 	u32 rcvd_bytes = 0;
 
@@ -5101,17 +5109,18 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
 	if (rx_todo > rx_received)
 		rx_todo = rx_received;
 
-	rx_filled = 0;
-	for (i = 0; i < rx_todo; i++) {
+	while (rx_done < rx_todo) {
 		struct mvpp2_rx_desc *rx_desc = mvpp2_rxq_next_desc_get(rxq);
 		struct mvpp2_bm_pool *bm_pool;
 		struct sk_buff *skb;
+		dma_addr_t phys_addr;
 		u32 bm, rx_status;
 		int pool, rx_bytes, err;
 
-		rx_filled++;
+		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - MVPP2_MH_SIZE;
+		phys_addr = rx_desc->buf_phys_addr;
 
 		bm = mvpp2_bm_cookie_build(rx_desc);
 		pool = mvpp2_bm_cookie_pool_get(bm);
@@ -5128,8 +5137,10 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
		 * comprised by the RX descriptor.
		 */
 		if (rx_status & MVPP2_RXD_ERR_SUMMARY) {
+err_drop_frame:
 			dev->stats.rx_errors++;
 			mvpp2_rx_error(port, rx_desc);
+			/* Return the buffer to the pool */
 			mvpp2_pool_refill(port, bm, rx_desc->buf_phys_addr,
 					  rx_desc->buf_cookie);
 			continue;
@@ -5137,6 +5148,15 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
 
 		skb = (struct sk_buff *)rx_desc->buf_cookie;
 
+		err = mvpp2_rx_refill(port, bm_pool, bm, 0);
+		if (err) {
+			netdev_err(port->dev, "failed to refill BM pools\n");
+			goto err_drop_frame;
+		}
+
+		dma_unmap_single(dev->dev.parent, phys_addr,
+				 bm_pool->buf_size, DMA_FROM_DEVICE);
+
 		rcvd_pkts++;
 		rcvd_bytes += rx_bytes;
 		atomic_inc(&bm_pool->in_use);
@@ -5147,12 +5167,6 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
 		mvpp2_rx_csum(port, rx_status, skb);
 
 		napi_gro_receive(&port->napi, skb);
-
-		err = mvpp2_rx_refill(port, bm_pool, bm, 0);
-		if (err) {
-			netdev_err(port->dev, "failed to refill BM pools\n");
-			rx_filled--;
-		}
 	}
 
 	if (rcvd_pkts) {
@@ -5166,7 +5180,7 @@ static int mvpp2_rx(struct mvpp2_port *port, int rx_todo,
 
 	/* Update Rx queue management counters */
 	wmb();
-	mvpp2_rxq_status_update(port, rxq->id, rx_todo, rx_filled);
+	mvpp2_rxq_status_update(port, rxq->id, rx_done, rx_done);
 
 	return rx_todo;
 }
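The mvpp2 receive hunks above reorder the per-packet steps so a replacement buffer is secured before the received one is unmapped and handed to the stack; on refill failure the frame is dropped via err_drop_frame, which returns the still-mapped buffer to the pool rather than leaking it. The resulting per-packet order, in outline (a fragment drawn from the hunks above, not standalone code):

	/* 1. refill the pool first; drop the frame if that fails */
	err = mvpp2_rx_refill(port, bm_pool, bm, 0);
	if (err)
		goto err_drop_frame;	/* buffer goes back to the pool */

	/* 2. only now unmap the received buffer ... */
	dma_unmap_single(dev->dev.parent, phys_addr,
			 bm_pool->buf_size, DMA_FROM_DEVICE);

	/* 3. ... and pass the skb up the stack */
	mvpp2_rx_csum(port, rx_status, skb);
	napi_gro_receive(&port->napi, skb);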
@@ -4306,9 +4306,10 @@ int mlx4_QP_FLOW_STEERING_ATTACH_wrapper(struct mlx4_dev *dev, int slave,
 		return -EOPNOTSUPP;
 
 	ctrl = (struct mlx4_net_trans_rule_hw_ctrl *)inbox->buf;
-	ctrl->port = mlx4_slave_convert_port(dev, slave, ctrl->port);
-	if (ctrl->port <= 0)
+	err = mlx4_slave_convert_port(dev, slave, ctrl->port);
+	if (err <= 0)
 		return -EINVAL;
+	ctrl->port = err;
 	qpn = be32_to_cpu(ctrl->qpn) & 0xffffff;
 	err = get_res(dev, slave, qpn, RES_QP, &rqp);
 	if (err) {
@@ -299,6 +299,7 @@ struct qed_hwfn {
 
 	/* Flag indicating whether interrupts are enabled or not*/
 	bool				b_int_enabled;
+	bool				b_int_requested;
 
 	struct qed_mcp_info		*mcp_info;
 
@@ -491,6 +492,8 @@ u32 qed_unzip_data(struct qed_hwfn *p_hwfn,
 		   u32 input_len, u8 *input_buf,
 		   u32 max_size, u8 *unzip_buf);
 
+int qed_slowpath_irq_req(struct qed_hwfn *hwfn);
+
 #define QED_ETH_INTERFACE_VERSION	300
 
 #endif /* _QED_H */
@@ -1385,52 +1385,63 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
 	return rc;
 }
 
-static u32 qed_hw_bar_size(struct qed_dev *cdev,
+static u32 qed_hw_bar_size(struct qed_hwfn *p_hwfn,
 			   u8 bar_id)
 {
-	u32 size = pci_resource_len(cdev->pdev, (bar_id > 0) ? 2 : 0);
+	u32 bar_reg = (bar_id == 0 ? PGLUE_B_REG_PF_BAR0_SIZE
+		       : PGLUE_B_REG_PF_BAR1_SIZE);
+	u32 val = qed_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
 
-	return size / cdev->num_hwfns;
+	/* Get the BAR size(in KB) from hardware given val */
+	return 1 << (val + 15);
 }
 
 int qed_hw_prepare(struct qed_dev *cdev,
 		   int personality)
 {
-	int rc, i;
+	struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+	int rc;
 
 	/* Store the precompiled init data ptrs */
 	qed_init_iro_array(cdev);
 
 	/* Initialize the first hwfn - will learn number of hwfns */
-	rc = qed_hw_prepare_single(&cdev->hwfns[0], cdev->regview,
+	rc = qed_hw_prepare_single(p_hwfn,
+				   cdev->regview,
 				   cdev->doorbells, personality);
 	if (rc)
 		return rc;
 
-	personality = cdev->hwfns[0].hw_info.personality;
+	personality = p_hwfn->hw_info.personality;
 
 	/* Initialize the rest of the hwfns */
-	for (i = 1; i < cdev->num_hwfns; i++) {
+	if (cdev->num_hwfns > 1) {
 		void __iomem *p_regview, *p_doorbell;
+		u8 __iomem *addr;
+
+		/* adjust bar offset for second engine */
+		addr = cdev->regview + qed_hw_bar_size(p_hwfn, 0) / 2;
+		p_regview = addr;
 
-		p_regview = cdev->regview +
-			    i * qed_hw_bar_size(cdev, 0);
-		p_doorbell = cdev->doorbells +
-			     i * qed_hw_bar_size(cdev, 1);
-		rc = qed_hw_prepare_single(&cdev->hwfns[i], p_regview,
+		/* adjust doorbell bar offset for second engine */
+		addr = cdev->doorbells + qed_hw_bar_size(p_hwfn, 1) / 2;
+		p_doorbell = addr;
+
+		/* prepare second hw function */
+		rc = qed_hw_prepare_single(&cdev->hwfns[1], p_regview,
 					   p_doorbell, personality);
+
+		/* in case of error, need to free the previously
+		 * initiliazed hwfn 0.
+		 */
 		if (rc) {
-			/* Cleanup previously initialized hwfns */
-			while (--i >= 0) {
-				qed_init_free(&cdev->hwfns[i]);
-				qed_mcp_free(&cdev->hwfns[i]);
-				qed_hw_hwfn_free(&cdev->hwfns[i]);
-			}
-			return rc;
+			qed_init_free(p_hwfn);
+			qed_mcp_free(p_hwfn);
+			qed_hw_hwfn_free(p_hwfn);
 		}
 	}
 
-	return 0;
+	return rc;
 }
 
 void qed_hw_remove(struct qed_dev *cdev)
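
[Editor's note] The rewritten qed_hw_bar_size() above derives the BAR size from a PGLUE_B register instead of dividing the PCI resource length across engines: the register holds an exponent, which the driver expands as 1 << (val + 15), i.e. 32 KB scaled by a power of two. A small standalone sketch of that decoding, with illustrative values only (this is not the driver code):

    #include <stdio.h>

    /* Expand a BAR-size exponent the way the patched driver does:
     * val = 0 -> 32 KB, val = 1 -> 64 KB, and so on.
     */
    static unsigned long bar_size_bytes(unsigned int val)
    {
        return 1UL << (val + 15);
    }

    int main(void)
    {
        for (unsigned int val = 0; val < 4; val++)
            printf("val=%u -> %lu bytes\n", val, bar_size_bytes(val));
        return 0;
    }
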
@@ -783,22 +783,16 @@ void qed_int_igu_enable_int(struct qed_hwfn *p_hwfn,
 	qed_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, igu_pf_conf);
 }
 
-void qed_int_igu_enable(struct qed_hwfn *p_hwfn,
-			struct qed_ptt *p_ptt,
-			enum qed_int_mode int_mode)
+int qed_int_igu_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+		       enum qed_int_mode int_mode)
 {
-	int i;
-
-	p_hwfn->b_int_enabled = 1;
+	int rc, i;
 
 	/* Mask non-link attentions */
 	for (i = 0; i < 9; i++)
 		qed_wr(p_hwfn, p_ptt,
 		       MISC_REG_AEU_ENABLE1_IGU_OUT_0 + (i << 2), 0);
 
-	/* Enable interrupt Generation */
-	qed_int_igu_enable_int(p_hwfn, p_ptt, int_mode);
-
 	/* Configure AEU signal change to produce attentions for link */
 	qed_wr(p_hwfn, p_ptt, IGU_REG_LEADING_EDGE_LATCH, 0xfff);
 	qed_wr(p_hwfn, p_ptt, IGU_REG_TRAILING_EDGE_LATCH, 0xfff);
@@ -808,6 +802,19 @@ void qed_int_igu_enable(struct qed_hwfn *p_hwfn,
 
 	/* Unmask AEU signals toward IGU */
 	qed_wr(p_hwfn, p_ptt, MISC_REG_AEU_MASK_ATTN_IGU, 0xff);
+	if ((int_mode != QED_INT_MODE_INTA) || IS_LEAD_HWFN(p_hwfn)) {
+		rc = qed_slowpath_irq_req(p_hwfn);
+		if (rc != 0) {
+			DP_NOTICE(p_hwfn, "Slowpath IRQ request failed\n");
+			return -EINVAL;
+		}
+		p_hwfn->b_int_requested = true;
+	}
+	/* Enable interrupt Generation */
+	qed_int_igu_enable_int(p_hwfn, p_ptt, int_mode);
+	p_hwfn->b_int_enabled = 1;
 
+	return rc;
 }
 
 void qed_int_igu_disable_int(struct qed_hwfn *p_hwfn,
@@ -1127,3 +1134,11 @@ int qed_int_get_num_sbs(struct qed_hwfn *p_hwfn,
 
 	return info->igu_sb_cnt;
 }
+
+void qed_int_disable_post_isr_release(struct qed_dev *cdev)
+{
+	int i;
+
+	for_each_hwfn(cdev, i)
+		cdev->hwfns[i].b_int_requested = false;
+}
@@ -169,10 +169,14 @@ int qed_int_get_num_sbs(struct qed_hwfn *p_hwfn,
 			int *p_iov_blks);
 
 /**
- * @file
+ * @brief qed_int_disable_post_isr_release - performs the cleanup post ISR
+ *        release. The API need to be called after releasing all slowpath IRQs
+ *        of the device.
+ *
+ * @param cdev
  *
- * @brief Interrupt handler
 */
+void qed_int_disable_post_isr_release(struct qed_dev *cdev);
 
 #define QED_CAU_DEF_RX_TIMER_RES 0
 #define QED_CAU_DEF_TX_TIMER_RES 0
@@ -366,10 +370,11 @@ void qed_int_setup(struct qed_hwfn *p_hwfn,
 * @param p_hwfn
 * @param p_ptt
 * @param int_mode
+*
+* @return int
 */
-void qed_int_igu_enable(struct qed_hwfn *p_hwfn,
-			struct qed_ptt *p_ptt,
-			enum qed_int_mode int_mode);
+int qed_int_igu_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+		       enum qed_int_mode int_mode);
 
 /**
 * @brief - Initialize CAU status block entry
@@ -476,41 +476,22 @@ static irqreturn_t qed_single_int(int irq, void *dev_instance)
 	return rc;
 }
 
-static int qed_slowpath_irq_req(struct qed_dev *cdev)
+int qed_slowpath_irq_req(struct qed_hwfn *hwfn)
 {
-	int i = 0, rc = 0;
+	struct qed_dev *cdev = hwfn->cdev;
+	int rc = 0;
+	u8 id;
 
 	if (cdev->int_params.out.int_mode == QED_INT_MODE_MSIX) {
-		/* Request all the slowpath MSI-X vectors */
-		for (i = 0; i < cdev->num_hwfns; i++) {
-			snprintf(cdev->hwfns[i].name, NAME_SIZE,
-				 "sp-%d-%02x:%02x.%02x",
-				 i, cdev->pdev->bus->number,
-				 PCI_SLOT(cdev->pdev->devfn),
-				 cdev->hwfns[i].abs_pf_id);
-
-			rc = request_irq(cdev->int_params.msix_table[i].vector,
-					 qed_msix_sp_int, 0,
-					 cdev->hwfns[i].name,
-					 cdev->hwfns[i].sp_dpc);
-			if (rc)
-				break;
-
-			DP_VERBOSE(&cdev->hwfns[i],
-				   (NETIF_MSG_INTR | QED_MSG_SP),
+		id = hwfn->my_id;
+		snprintf(hwfn->name, NAME_SIZE, "sp-%d-%02x:%02x.%02x",
+			 id, cdev->pdev->bus->number,
+			 PCI_SLOT(cdev->pdev->devfn), hwfn->abs_pf_id);
+		rc = request_irq(cdev->int_params.msix_table[id].vector,
+				 qed_msix_sp_int, 0, hwfn->name, hwfn->sp_dpc);
+		if (!rc)
+			DP_VERBOSE(hwfn, (NETIF_MSG_INTR | QED_MSG_SP),
 				   "Requested slowpath MSI-X\n");
-		}
-
-		if (i != cdev->num_hwfns) {
-			/* Free already request MSI-X vectors */
-			for (i--; i >= 0; i--) {
-				unsigned int vec =
-					cdev->int_params.msix_table[i].vector;
-				synchronize_irq(vec);
-				free_irq(cdev->int_params.msix_table[i].vector,
-					 cdev->hwfns[i].sp_dpc);
-			}
-		}
 	} else {
 		unsigned long flags = 0;
 
@@ -534,13 +515,17 @@ static void qed_slowpath_irq_free(struct qed_dev *cdev)
 
 	if (cdev->int_params.out.int_mode == QED_INT_MODE_MSIX) {
 		for_each_hwfn(cdev, i) {
+			if (!cdev->hwfns[i].b_int_requested)
+				break;
 			synchronize_irq(cdev->int_params.msix_table[i].vector);
 			free_irq(cdev->int_params.msix_table[i].vector,
 				 cdev->hwfns[i].sp_dpc);
 		}
 	} else {
-		free_irq(cdev->pdev->irq, cdev);
+		if (QED_LEADING_HWFN(cdev)->b_int_requested)
+			free_irq(cdev->pdev->irq, cdev);
 	}
+	qed_int_disable_post_isr_release(cdev);
 }
 
 static int qed_nic_stop(struct qed_dev *cdev)
@@ -765,16 +750,11 @@ static int qed_slowpath_start(struct qed_dev *cdev,
 	if (rc)
 		goto err1;
 
-	/* Request the slowpath IRQ */
-	rc = qed_slowpath_irq_req(cdev);
-	if (rc)
-		goto err2;
-
 	/* Allocate stream for unzipping */
 	rc = qed_alloc_stream_mem(cdev);
 	if (rc) {
 		DP_NOTICE(cdev, "Failed to allocate stream memory\n");
-		goto err3;
+		goto err2;
 	}
 
 	/* Start the slowpath */
@@ -363,4 +363,8 @@
 	 0x7 << 0)
 #define MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT \
 	0
+#define PGLUE_B_REG_PF_BAR0_SIZE \
+	0x2aae60UL
+#define PGLUE_B_REG_PF_BAR1_SIZE \
+	0x2aae64UL
 #endif
@@ -124,8 +124,12 @@ struct qed_spq {
 	dma_addr_t		p_phys;
 	struct qed_spq_entry	*p_virt;
 
-	/* Used as index for completions (returns on EQ by FW) */
-	u16			echo_idx;
+#define SPQ_RING_SIZE \
+	(CORE_SPQE_PAGE_SIZE_BYTES / sizeof(struct slow_path_element))
+
+	/* Bitmap for handling out-of-order completions */
+	DECLARE_BITMAP(p_comp_bitmap, SPQ_RING_SIZE);
+	u8			comp_bitmap_idx;
 
 	/* Statistics */
 	u32			unlimited_pending_count;
@@ -112,8 +112,6 @@ static int
 qed_spq_fill_entry(struct qed_hwfn *p_hwfn,
 		   struct qed_spq_entry *p_ent)
 {
-	p_ent->elem.hdr.echo = 0;
-	p_hwfn->p_spq->echo_idx++;
 	p_ent->flags = 0;
 
 	switch (p_ent->comp_mode) {
@@ -195,10 +193,12 @@ static int qed_spq_hw_post(struct qed_hwfn *p_hwfn,
 			   struct qed_spq *p_spq,
 			   struct qed_spq_entry *p_ent)
 {
 	struct qed_chain *p_chain = &p_hwfn->p_spq->chain;
+	u16 echo = qed_chain_get_prod_idx(p_chain);
 	struct slow_path_element *elem;
 	struct core_db_data db;
 
+	p_ent->elem.hdr.echo = cpu_to_le16(echo);
 	elem = qed_chain_produce(p_chain);
 	if (!elem) {
 		DP_NOTICE(p_hwfn, "Failed to produce from SPQ chain\n");
@@ -437,7 +437,9 @@ void qed_spq_setup(struct qed_hwfn *p_hwfn)
 	p_spq->comp_count = 0;
 	p_spq->comp_sent_count = 0;
 	p_spq->unlimited_pending_count = 0;
-	p_spq->echo_idx = 0;
+
+	bitmap_zero(p_spq->p_comp_bitmap, SPQ_RING_SIZE);
+	p_spq->comp_bitmap_idx = 0;
 
 	/* SPQ cid, cannot fail */
 	qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_CORE, &p_spq->cid);
@@ -582,26 +584,32 @@ qed_spq_add_entry(struct qed_hwfn *p_hwfn,
 	struct qed_spq *p_spq = p_hwfn->p_spq;
 
 	if (p_ent->queue == &p_spq->unlimited_pending) {
-		struct qed_spq_entry *p_en2;
-
 		if (list_empty(&p_spq->free_pool)) {
 			list_add_tail(&p_ent->list, &p_spq->unlimited_pending);
 			p_spq->unlimited_pending_count++;
 
 			return 0;
+		} else {
+			struct qed_spq_entry *p_en2;
+
+			p_en2 = list_first_entry(&p_spq->free_pool,
+						 struct qed_spq_entry,
+						 list);
+			list_del(&p_en2->list);
+
+			/* Copy the ring element physical pointer to the new
+			 * entry, since we are about to override the entire ring
+			 * entry and don't want to lose the pointer.
+			 */
+			p_ent->elem.data_ptr = p_en2->elem.data_ptr;
+
+			*p_en2 = *p_ent;
+
+			kfree(p_ent);
+
+			p_ent = p_en2;
 		}
-
-		p_en2 = list_first_entry(&p_spq->free_pool,
-					 struct qed_spq_entry,
-					 list);
-		list_del(&p_en2->list);
-
-		/* Strcut assignment */
-		*p_en2 = *p_ent;
-
-		kfree(p_ent);
-
-		p_ent = p_en2;
 	}
 
 	/* entry is to be placed in 'pending' queue */
@@ -777,13 +785,38 @@ int qed_spq_completion(struct qed_hwfn *p_hwfn,
 	list_for_each_entry_safe(p_ent, tmp, &p_spq->completion_pending,
 				 list) {
 		if (p_ent->elem.hdr.echo == echo) {
+			u16 pos = le16_to_cpu(echo) % SPQ_RING_SIZE;
+
 			list_del(&p_ent->list);
 
-			qed_chain_return_produced(&p_spq->chain);
+			/* Avoid overriding of SPQ entries when getting
+			 * out-of-order completions, by marking the completions
+			 * in a bitmap and increasing the chain consumer only
+			 * for the first successive completed entries.
+			 */
+			bitmap_set(p_spq->p_comp_bitmap, pos, SPQ_RING_SIZE);
+
+			while (test_bit(p_spq->comp_bitmap_idx,
+					p_spq->p_comp_bitmap)) {
+				bitmap_clear(p_spq->p_comp_bitmap,
+					     p_spq->comp_bitmap_idx,
+					     SPQ_RING_SIZE);
+				p_spq->comp_bitmap_idx++;
+				qed_chain_return_produced(&p_spq->chain);
+			}
+
 			p_spq->comp_count++;
 			found = p_ent;
 			break;
 		}
+
+		/* This is relatively uncommon - depends on scenarios
+		 * which have mutliple per-PF sent ramrods.
+		 */
+		DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
+			   "Got completion for echo %04x - doesn't match echo %04x in completion pending list\n",
+			   le16_to_cpu(echo),
+			   le16_to_cpu(p_ent->elem.hdr.echo));
 	}
 
 	/* Release lock before callback, as callback may post
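
[Editor's note] The completion hunk above replaces the single echo index with a bitmap: each completion marks its ring slot, and the chain consumer advances only across the leading run of completed slots, so an out-of-order completion can never free a slot that an earlier, still-pending ramrod occupies. A minimal userspace sketch of that bookkeeping, with illustrative names (this is not the driver's code):

    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SIZE 64              /* illustrative ring size */

    static bool completed[RING_SIZE]; /* stands in for the bitmap */
    static unsigned int cons_idx;     /* next entry we may consume */

    /* Mark one completion; consume only the leading run of finished
     * entries so earlier, still-pending entries are never overridden.
     */
    static void complete_echo(unsigned int echo)
    {
        completed[echo % RING_SIZE] = true;
        while (completed[cons_idx % RING_SIZE]) {
            completed[cons_idx % RING_SIZE] = false;
            cons_idx++;               /* chain consumer advances here */
        }
    }

    int main(void)
    {
        complete_echo(1);             /* out of order: nothing consumed */
        complete_echo(0);             /* now entries 0 and 1 are consumed */
        printf("cons_idx = %u\n", cons_idx);  /* prints 2 */
        return 0;
    }
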
@@ -246,7 +246,8 @@ int qlcnic_83xx_check_vnic_state(struct qlcnic_adapter *adapter)
 	u32 state;
 
 	state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
-	while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit--) {
+	while (state != QLCNIC_DEV_NPAR_OPER && idc->vnic_wait_limit) {
+		idc->vnic_wait_limit--;
 		msleep(1000);
 		state = QLCRDX(ahw, QLC_83XX_VNIC_STATE);
 	}
@@ -4211,8 +4211,9 @@ static int ql_change_rx_buffers(struct ql_adapter *qdev)
 
 	/* Wait for an outstanding reset to complete. */
 	if (!test_bit(QL_ADAPTER_UP, &qdev->flags)) {
-		int i = 3;
-		while (i-- && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
+		int i = 4;
+
+		while (--i && !test_bit(QL_ADAPTER_UP, &qdev->flags)) {
 			netif_err(qdev, ifup, qdev->ndev,
 				  "Waiting for adapter UP...\n");
 			ssleep(1);
@@ -736,9 +736,8 @@ qcaspi_netdev_tx_timeout(struct net_device *dev)
 	netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
 		    jiffies, jiffies - dev->trans_start);
 	qca->net_dev->stats.tx_errors++;
-	/* wake the queue if there is room */
-	if (qcaspi_tx_ring_has_space(&qca->txr))
-		netif_wake_queue(dev);
+	/* Trigger tx queue flush and QCA7000 reset */
+	qca->sync = QCASPI_SYNC_UNKNOWN;
 }
 
 static int
@@ -905,6 +905,9 @@ static int ravb_phy_init(struct net_device *ndev)
 		netdev_info(ndev, "limited PHY to 100Mbit/s\n");
 	}
 
+	/* 10BASE is not supported */
+	phydev->supported &= ~PHY_10BT_FEATURES;
+
 	netdev_info(ndev, "attached PHY %d (IRQ %d) to driver %s\n",
 		    phydev->addr, phydev->irq, phydev->drv->name);
 
@@ -1037,7 +1040,7 @@ static const char ravb_gstrings_stats[][ETH_GSTRING_LEN] = {
 	"rx_queue_1_mcast_packets",
 	"rx_queue_1_errors",
 	"rx_queue_1_crc_errors",
-	"rx_queue_1_frame_errors_",
+	"rx_queue_1_frame_errors",
 	"rx_queue_1_length_errors",
 	"rx_queue_1_missed_errors",
 	"rx_queue_1_over_errors",
@@ -52,6 +52,8 @@
 		NETIF_MSG_RX_ERR| \
 		NETIF_MSG_TX_ERR)
 
+#define SH_ETH_OFFSET_INVALID	((u16)~0)
+
 #define SH_ETH_OFFSET_DEFAULTS			\
 	[0 ... SH_ETH_MAX_REGISTER_OFFSET - 1] = SH_ETH_OFFSET_INVALID
 
@@ -404,6 +406,28 @@ static const u16 sh_eth_offset_fast_sh3_sh2[SH_ETH_MAX_REGISTER_OFFSET] = {
 static void sh_eth_rcv_snd_disable(struct net_device *ndev);
 static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev);
 
+static void sh_eth_write(struct net_device *ndev, u32 data, int enum_index)
+{
+	struct sh_eth_private *mdp = netdev_priv(ndev);
+	u16 offset = mdp->reg_offset[enum_index];
+
+	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+		return;
+
+	iowrite32(data, mdp->addr + offset);
+}
+
+static u32 sh_eth_read(struct net_device *ndev, int enum_index)
+{
+	struct sh_eth_private *mdp = netdev_priv(ndev);
+	u16 offset = mdp->reg_offset[enum_index];
+
+	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
+		return ~0U;
+
+	return ioread32(mdp->addr + offset);
+}
+
 static bool sh_eth_is_gether(struct sh_eth_private *mdp)
 {
 	return mdp->reg_offset == sh_eth_offset_gigabit;
@@ -1172,7 +1196,7 @@ static void sh_eth_ring_format(struct net_device *ndev)
 			break;
 		}
 		mdp->rx_skbuff[i] = skb;
-		rxdesc->addr = dma_addr;
+		rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
 		rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP);
 
 		/* Rx descriptor address set */
@@ -1403,7 +1427,8 @@ static int sh_eth_txfree(struct net_device *ndev)
 			   entry, edmac_to_cpu(mdp, txdesc->status));
 		/* Free the original skb. */
 		if (mdp->tx_skbuff[entry]) {
-			dma_unmap_single(&ndev->dev, txdesc->addr,
+			dma_unmap_single(&ndev->dev,
+					 edmac_to_cpu(mdp, txdesc->addr),
 					 txdesc->buffer_length, DMA_TO_DEVICE);
 			dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
 			mdp->tx_skbuff[entry] = NULL;
@@ -1462,6 +1487,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 		if (mdp->cd->shift_rd0)
 			desc_status >>= 16;
 
+		skb = mdp->rx_skbuff[entry];
 		if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 |
 				   RD_RFS5 | RD_RFS6 | RD_RFS10)) {
 			ndev->stats.rx_errors++;
@@ -1477,16 +1503,16 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 				ndev->stats.rx_missed_errors++;
 			if (desc_status & RD_RFS10)
 				ndev->stats.rx_over_errors++;
-		} else {
+		} else if (skb) {
+			dma_addr = edmac_to_cpu(mdp, rxdesc->addr);
 			if (!mdp->cd->hw_swap)
 				sh_eth_soft_swap(
-					phys_to_virt(ALIGN(rxdesc->addr, 4)),
+					phys_to_virt(ALIGN(dma_addr, 4)),
 					pkt_len + 2);
-			skb = mdp->rx_skbuff[entry];
 			mdp->rx_skbuff[entry] = NULL;
 			if (mdp->cd->rpadir)
 				skb_reserve(skb, NET_IP_ALIGN);
-			dma_unmap_single(&ndev->dev, rxdesc->addr,
+			dma_unmap_single(&ndev->dev, dma_addr,
 					 ALIGN(mdp->rx_buf_sz, 32),
 					 DMA_FROM_DEVICE);
 			skb_put(skb, pkt_len);
@@ -1523,7 +1549,7 @@ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
 			mdp->rx_skbuff[entry] = skb;
 
 			skb_checksum_none_assert(skb);
-			rxdesc->addr = dma_addr;
+			rxdesc->addr = cpu_to_edmac(mdp, dma_addr);
 		}
 		dma_wmb(); /* RACT bit must be set after all the above writes */
 		if (entry >= mdp->num_rx_ring - 1)
@@ -2331,8 +2357,8 @@ static void sh_eth_tx_timeout(struct net_device *ndev)
 	/* Free all the skbuffs in the Rx queue. */
 	for (i = 0; i < mdp->num_rx_ring; i++) {
 		rxdesc = &mdp->rx_ring[i];
-		rxdesc->status = 0;
-		rxdesc->addr = 0xBADF00D0;
+		rxdesc->status = cpu_to_edmac(mdp, 0);
+		rxdesc->addr = cpu_to_edmac(mdp, 0xBADF00D0);
 		dev_kfree_skb(mdp->rx_skbuff[i]);
 		mdp->rx_skbuff[i] = NULL;
 	}
@@ -2350,6 +2376,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 {
 	struct sh_eth_private *mdp = netdev_priv(ndev);
 	struct sh_eth_txdesc *txdesc;
+	dma_addr_t dma_addr;
 	u32 entry;
 	unsigned long flags;
 
@@ -2372,14 +2399,14 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	txdesc = &mdp->tx_ring[entry];
 	/* soft swap. */
 	if (!mdp->cd->hw_swap)
-		sh_eth_soft_swap(phys_to_virt(ALIGN(txdesc->addr, 4)),
-				 skb->len + 2);
-	txdesc->addr = dma_map_single(&ndev->dev, skb->data, skb->len,
-				      DMA_TO_DEVICE);
-	if (dma_mapping_error(&ndev->dev, txdesc->addr)) {
+		sh_eth_soft_swap(PTR_ALIGN(skb->data, 4), skb->len + 2);
+	dma_addr = dma_map_single(&ndev->dev, skb->data, skb->len,
+				  DMA_TO_DEVICE);
+	if (dma_mapping_error(&ndev->dev, dma_addr)) {
 		kfree_skb(skb);
 		return NETDEV_TX_OK;
 	}
+	txdesc->addr = cpu_to_edmac(mdp, dma_addr);
 	txdesc->buffer_length = skb->len;
 
 	dma_wmb(); /* TACT bit must be set after all the above writes */
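
[Editor's note] The sh_eth changes above route every descriptor field through cpu_to_edmac()/edmac_to_cpu(), because the EDMAC reads descriptors in its own fixed byte order regardless of the CPU's. A hedged userspace sketch of what such a conversion helper does (the real driver helper is configured per SoC and also covers the reverse direction):

    #include <stdint.h>
    #include <stdio.h>

    /* Byte-swap a 32-bit value. */
    static uint32_t swab32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0xff00) |
               ((x << 8) & 0xff0000) | (x << 24);
    }

    /* Convert a CPU value into the byte order the (hypothetical)
     * descriptor engine expects; only swap on mismatched hosts.
     */
    static uint32_t cpu_to_desc(uint32_t x, int desc_is_big_endian)
    {
        union { uint32_t v; uint8_t b[4]; } probe = { .v = 1 };
        int cpu_is_little = probe.b[0] == 1;

        return (cpu_is_little && desc_is_big_endian) ? swab32(x) : x;
    }

    int main(void)
    {
        printf("0x%08x\n", cpu_to_desc(0xBADF00D0u, 1));
        return 0;
    }
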
@@ -546,31 +546,6 @@ static inline void sh_eth_soft_swap(char *src, int len)
 #endif
 }
 
-#define SH_ETH_OFFSET_INVALID	((u16) ~0)
-
-static inline void sh_eth_write(struct net_device *ndev, u32 data,
-				int enum_index)
-{
-	struct sh_eth_private *mdp = netdev_priv(ndev);
-	u16 offset = mdp->reg_offset[enum_index];
-
-	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
-		return;
-
-	iowrite32(data, mdp->addr + offset);
-}
-
-static inline u32 sh_eth_read(struct net_device *ndev, int enum_index)
-{
-	struct sh_eth_private *mdp = netdev_priv(ndev);
-	u16 offset = mdp->reg_offset[enum_index];
-
-	if (WARN_ON(offset == SH_ETH_OFFSET_INVALID))
-		return ~0U;
-
-	return ioread32(mdp->addr + offset);
-}
-
 static inline void *sh_eth_tsu_get_offset(struct sh_eth_private *mdp,
 					  int enum_index)
 {
@@ -3299,7 +3299,8 @@ static int efx_ef10_filter_remove_internal(struct efx_nic *efx,
 
 		new_spec.priority = EFX_FILTER_PRI_AUTO;
 		new_spec.flags = (EFX_FILTER_FLAG_RX |
-				  EFX_FILTER_FLAG_RX_RSS);
+				  (efx_rss_enabled(efx) ?
+				   EFX_FILTER_FLAG_RX_RSS : 0));
 		new_spec.dmaq_id = 0;
 		new_spec.rss_context = EFX_FILTER_RSS_CONTEXT_DEFAULT;
 		rc = efx_ef10_filter_push(efx, &new_spec,
@@ -3921,6 +3922,7 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
 {
 	struct efx_ef10_filter_table *table = efx->filter_state;
 	struct efx_ef10_dev_addr *addr_list;
+	enum efx_filter_flags filter_flags;
 	struct efx_filter_spec spec;
 	u8 baddr[ETH_ALEN];
 	unsigned int i, j;
@@ -3935,11 +3937,11 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
 		addr_count = table->dev_uc_count;
 	}
 
+	filter_flags = efx_rss_enabled(efx) ? EFX_FILTER_FLAG_RX_RSS : 0;
+
 	/* Insert/renew filters */
 	for (i = 0; i < addr_count; i++) {
-		efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO,
-				   EFX_FILTER_FLAG_RX_RSS,
-				   0);
+		efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
 		efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC,
 					 addr_list[i].addr);
 		rc = efx_ef10_filter_insert(efx, &spec, true);
@@ -3968,9 +3970,7 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
 
 	if (multicast && rollback) {
 		/* Also need an Ethernet broadcast filter */
-		efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO,
-				   EFX_FILTER_FLAG_RX_RSS,
-				   0);
+		efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
 		eth_broadcast_addr(baddr);
 		efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC, baddr);
 		rc = efx_ef10_filter_insert(efx, &spec, true);
@@ -4000,13 +4000,14 @@ static int efx_ef10_filter_insert_def(struct efx_nic *efx, bool multicast,
 {
 	struct efx_ef10_filter_table *table = efx->filter_state;
 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+	enum efx_filter_flags filter_flags;
 	struct efx_filter_spec spec;
 	u8 baddr[ETH_ALEN];
 	int rc;
 
-	efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO,
-			   EFX_FILTER_FLAG_RX_RSS,
-			   0);
+	filter_flags = efx_rss_enabled(efx) ? EFX_FILTER_FLAG_RX_RSS : 0;
+
+	efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
 
 	if (multicast)
 		efx_filter_set_mc_def(&spec);
@@ -4023,8 +4024,7 @@ static int efx_ef10_filter_insert_def(struct efx_nic *efx, bool multicast,
 	if (!nic_data->workaround_26807) {
 		/* Also need an Ethernet broadcast filter */
 		efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO,
-				   EFX_FILTER_FLAG_RX_RSS,
-				   0);
+				   filter_flags, 0);
 		eth_broadcast_addr(baddr);
 		efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC,
 					 baddr);
@@ -76,6 +76,11 @@ void efx_schedule_slow_fill(struct efx_rx_queue *rx_queue);
 #define EFX_TXQ_MAX_ENT(efx)	(EFX_WORKAROUND_35388(efx) ? \
 				 EFX_MAX_DMAQ_SIZE / 2 : EFX_MAX_DMAQ_SIZE)
 
+static inline bool efx_rss_enabled(struct efx_nic *efx)
+{
+	return efx->rss_spread > 1;
+}
+
 /* Filters */
 
 void efx_mac_reconfigure(struct efx_nic *efx);
@@ -2242,7 +2242,8 @@ efx_farch_filter_init_rx_auto(struct efx_nic *efx,
 	 */
 	spec->priority = EFX_FILTER_PRI_AUTO;
 	spec->flags = (EFX_FILTER_FLAG_RX |
-		       (efx->n_rx_channels > 1 ? EFX_FILTER_FLAG_RX_RSS : 0) |
+		       (efx_rss_enabled(efx) ? EFX_FILTER_FLAG_RX_RSS : 0) |
 		       (efx->rx_scatter ? EFX_FILTER_FLAG_RX_SCATTER : 0));
 	spec->dmaq_id = 0;
 }
@@ -418,7 +418,7 @@ static void txc_reset_logic_mmd(struct efx_nic *efx, int mmd)
 
 	val |= (1 << TXC_GLCMD_LMTSWRST_LBN);
 	efx_mdio_write(efx, mmd, TXC_GLRGS_GLCMD, val);
-	while (tries--) {
+	while (--tries) {
 		val = efx_mdio_read(efx, mmd, TXC_GLRGS_GLCMD);
 		if (!(val & (1 << TXC_GLCMD_LMTSWRST_LBN)))
 			break;
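
[Editor's note] Several of the timeout-loop fixes in this pull (here, and in the qlge and qlcnic hunks above) address the same pattern: with a post-decrement condition the counter underflows to -1 on timeout, so a later "if (!tries)" test never fires; the pre-decrement form stops one iteration earlier and leaves the counter at exactly 0. A small standalone illustration (not driver code):

    #include <stdio.h>

    /* With post-decrement the body still runs for tries == 1 and the
     * counter ends at -1, so a later "if (!tries)" timeout check can be
     * fooled; pre-decrement stops one iteration earlier and leaves 0.
     */
    int main(void)
    {
        int a = 3, b = 3, iters_post = 0, iters_pre = 0;

        while (a--)
            iters_post++;
        while (--b)
            iters_pre++;

        printf("post: %d iterations, a = %d\n", iters_post, a); /* 3, -1 */
        printf("pre:  %d iterations, b = %d\n", iters_pre, b);  /* 2,  0 */
        return 0;
    }
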
@@ -153,7 +153,11 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		sun7i_gmac_exit(pdev, plat_dat->bsp_priv);
+
+	return ret;
 }
 
 static const struct of_device_id sun7i_dwmac_match[] = {
@@ -3046,8 +3046,6 @@ int stmmac_suspend(struct net_device *ndev)
 	priv->hw->dma->stop_tx(priv->ioaddr);
 	priv->hw->dma->stop_rx(priv->ioaddr);
 
-	stmmac_clear_descriptors(priv);
-
 	/* Enable Power down mode by programming the PMT regs */
 	if (device_may_wakeup(priv->device)) {
 		priv->hw->mac->pmt(priv->hw, priv->wolopts);
@@ -3105,7 +3103,12 @@ int stmmac_resume(struct net_device *ndev)
 
 	netif_device_attach(ndev);
 
-	init_dma_desc_rings(ndev, GFP_ATOMIC);
+	priv->cur_rx = 0;
+	priv->dirty_rx = 0;
+	priv->dirty_tx = 0;
+	priv->cur_tx = 0;
+	stmmac_clear_descriptors(priv);
+
 	stmmac_hw_setup(ndev, false);
 	stmmac_init_tx_coalesce(priv);
 	stmmac_set_rx_mode(ndev);
@@ -967,8 +967,6 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
 	err = udp_tunnel6_xmit_skb(dst, gs6->sock->sk, skb, dev,
 				   &fl6.saddr, &fl6.daddr, prio, ttl,
 				   sport, geneve->dst_port, !udp_csum);
-
-	iptunnel_xmit_stats(err, &dev->stats, dev->tstats);
 	return NETDEV_TX_OK;
 
 tx_error:
@@ -149,9 +149,14 @@ int mdio_mux_init(struct device *dev,
 		}
 		cb->bus_number = v;
 		cb->parent = pb;
-		cb->mii_bus = mdiobus_alloc();
-		cb->mii_bus->priv = cb;
 
+		cb->mii_bus = mdiobus_alloc();
+		if (!cb->mii_bus) {
+			ret_val = -ENOMEM;
+			of_node_put(child_bus_node);
+			break;
+		}
+		cb->mii_bus->priv = cb;
 		cb->mii_bus->irq = cb->phy_irq;
 		cb->mii_bus->name = "mdio_mux";
 		snprintf(cb->mii_bus->id, MII_BUS_ID_SIZE, "%x.%x",
@@ -339,9 +339,18 @@ static int ksz9021_config_init(struct phy_device *phydev)
 {
 	const struct device *dev = &phydev->dev;
 	const struct device_node *of_node = dev->of_node;
+	const struct device *dev_walker;
 
-	if (!of_node && dev->parent->of_node)
-		of_node = dev->parent->of_node;
+	/* The Micrel driver has a deprecated option to place phy OF
+	 * properties in the MAC node. Walk up the tree of devices to
+	 * find a device with an OF node.
+	 */
+	dev_walker = &phydev->dev;
+	do {
+		of_node = dev_walker->of_node;
+		dev_walker = dev_walker->parent;
+
+	} while (!of_node && dev_walker);
 
 	if (of_node) {
 		ksz9021_load_values_from_of(phydev, of_node,
@@ -568,6 +568,9 @@ static int pppoe_create(struct net *net, struct socket *sock, int kern)
 	sk->sk_family		= PF_PPPOX;
 	sk->sk_protocol		= PX_PROTO_OE;
 
+	INIT_WORK(&pppox_sk(sk)->proto.pppoe.padt_work,
+		  pppoe_unbind_sock_work);
+
 	return 0;
 }
 
@@ -632,8 +635,6 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
 
 	lock_sock(sk);
 
-	INIT_WORK(&po->proto.pppoe.padt_work, pppoe_unbind_sock_work);
-
 	error = -EINVAL;
 	if (sp->sa_protocol != PX_PROTO_OE)
 		goto end;
@@ -663,8 +664,13 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
 			po->pppoe_dev = NULL;
 		}
 
-		memset(sk_pppox(po) + 1, 0,
-		       sizeof(struct pppox_sock) - sizeof(struct sock));
+		po->pppoe_ifindex = 0;
+		memset(&po->pppoe_pa, 0, sizeof(po->pppoe_pa));
+		memset(&po->pppoe_relay, 0, sizeof(po->pppoe_relay));
+		memset(&po->chan, 0, sizeof(po->chan));
+		po->next = NULL;
+		po->num = 0;
+
 		sk->sk_state = PPPOX_NONE;
 	}
 
@@ -419,6 +419,9 @@ static int pptp_bind(struct socket *sock, struct sockaddr *uservaddr,
 	struct pptp_opt *opt = &po->proto.pptp;
 	int error = 0;
 
+	if (sockaddr_len < sizeof(struct sockaddr_pppox))
+		return -EINVAL;
+
 	lock_sock(sk);
 
 	opt->src_addr = sp->sa_addr.pptp;
@@ -440,6 +443,9 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
 	struct flowi4 fl4;
 	int error = 0;
 
+	if (sockaddr_len < sizeof(struct sockaddr_pppox))
+		return -EINVAL;
+
 	if (sp->sa_protocol != PX_PROTO_PPTP)
 		return -EINVAL;
 
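
[Editor's note] The pptp hunks above (like the sco_sock_bind fix listed in the pull summary) guard against a short address buffer before any field of the sockaddr is read. A hedged userspace sketch of the pattern, with an illustrative struct sockaddr_demo standing in for the real address type:

    #include <errno.h>
    #include <string.h>

    /* Illustrative stand-in for a protocol-specific sockaddr. */
    struct sockaddr_demo {
        unsigned short family;
        unsigned char  data[14];
    };

    /* Reject bind()/connect() calls whose address buffer is shorter
     * than the structure the handler is about to read.
     */
    static int demo_bind(const void *uaddr, int addr_len)
    {
        struct sockaddr_demo addr;

        if (addr_len < (int)sizeof(addr))
            return -EINVAL;       /* too short: never read the fields */

        memcpy(&addr, uaddr, sizeof(addr));
        return 0;
    }

    int main(void)
    {
        struct sockaddr_demo good = { .family = 1 };

        return (demo_bind(&good, sizeof(good)) == 0 &&
                demo_bind(&good, 2) == -EINVAL) ? 0 : 1;
    }
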
@@ -158,7 +158,7 @@ static int cdc_mbim_bind(struct usbnet *dev, struct usb_interface *intf)
 	if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
 		goto err;
 
-	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
+	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, dev->driver_info->data);
 	if (ret)
 		goto err;
 
@@ -582,6 +582,26 @@ static const struct driver_info cdc_mbim_info_zlp = {
 	.tx_fixup = cdc_mbim_tx_fixup,
 };
 
+/* The spefication explicitly allows NDPs to be placed anywhere in the
+ * frame, but some devices fail unless the NDP is placed after the IP
+ * packets. Using the CDC_NCM_FLAG_NDP_TO_END flags to force this
+ * behaviour.
+ *
+ * Note: The current implementation of this feature restricts each NTB
+ * to a single NDP, implying that multiplexed sessions cannot share an
+ * NTB. This might affect performace for multiplexed sessions.
+ */
+static const struct driver_info cdc_mbim_info_ndp_to_end = {
+	.description = "CDC MBIM",
+	.flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN,
+	.bind = cdc_mbim_bind,
+	.unbind = cdc_mbim_unbind,
+	.manage_power = cdc_mbim_manage_power,
+	.rx_fixup = cdc_mbim_rx_fixup,
+	.tx_fixup = cdc_mbim_tx_fixup,
+	.data = CDC_NCM_FLAG_NDP_TO_END,
+};
+
 static const struct usb_device_id mbim_devs[] = {
 	/* This duplicate NCM entry is intentional. MBIM devices can
 	 * be disguised as NCM by default, and this is necessary to
@@ -597,6 +617,10 @@ static const struct usb_device_id mbim_devs[] = {
 	{ USB_VENDOR_AND_INTERFACE_INFO(0x0bdb, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
 	  .driver_info = (unsigned long)&cdc_mbim_info,
 	},
+	/* Huawei E3372 fails unless NDP comes after the IP packets */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x12d1, 0x157d, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+	  .driver_info = (unsigned long)&cdc_mbim_info_ndp_to_end,
+	},
 	/* default entry */
 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
 	  .driver_info = (unsigned long)&cdc_mbim_info_zlp,
@@ -955,10 +955,18 @@ static struct usb_cdc_ncm_ndp16 *cdc_ncm_ndp(struct cdc_ncm_ctx *ctx, struct sk_
 	 * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and
 	 * the wNdpIndex field in the header is actually not consistent with reality. It will be later.
 	 */
-	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
 		if (ctx->delayed_ndp16->dwSignature == sign)
 			return ctx->delayed_ndp16;
+
+		/* We can only push a single NDP to the end. Return
+		 * NULL to send what we've already got and queue this
+		 * skb for later.
+		 */
+		else if (ctx->delayed_ndp16->dwSignature)
+			return NULL;
+	}
 
 	/* follow the chain of NDPs, looking for a match */
 	while (ndpoffset) {
 		ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
@@ -3067,17 +3067,6 @@ static int rtl8152_open(struct net_device *netdev)
 
 	mutex_lock(&tp->control);
 
-	/* The WORK_ENABLE may be set when autoresume occurs */
-	if (test_bit(WORK_ENABLE, &tp->flags)) {
-		clear_bit(WORK_ENABLE, &tp->flags);
-		usb_kill_urb(tp->intr_urb);
-		cancel_delayed_work_sync(&tp->schedule);
-
-		/* disable the tx/rx, if the workqueue has enabled them. */
-		if (netif_carrier_ok(netdev))
-			tp->rtl_ops.disable(tp);
-	}
-
 	tp->rtl_ops.up(tp);
 
 	rtl8152_set_speed(tp, AUTONEG_ENABLE,
@@ -3124,12 +3113,6 @@ static int rtl8152_close(struct net_device *netdev)
 	} else {
 		mutex_lock(&tp->control);
 
-		/* The autosuspend may have been enabled and wouldn't
-		 * be disable when autoresume occurs, because the
-		 * netif_running() would be false.
-		 */
-		rtl_runtime_suspend_enable(tp, false);
-
 		tp->rtl_ops.down(tp);
 
 		mutex_unlock(&tp->control);
@@ -3512,7 +3495,7 @@ static int rtl8152_resume(struct usb_interface *intf)
 		netif_device_attach(tp->netdev);
 	}
 
-	if (netif_running(tp->netdev)) {
+	if (netif_running(tp->netdev) && tp->netdev->flags & IFF_UP) {
 		if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
 			rtl_runtime_suspend_enable(tp, false);
 			clear_bit(SELECTIVE_SUSPEND, &tp->flags);
@@ -3532,6 +3515,8 @@ static int rtl8152_resume(struct usb_interface *intf)
 		}
 		usb_submit_urb(tp->intr_urb, GFP_KERNEL);
 	} else if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
+		if (tp->netdev->flags & IFF_UP)
+			rtl_runtime_suspend_enable(tp, false);
 		clear_bit(SELECTIVE_SUSPEND, &tp->flags);
 	}
 
@@ -1158,7 +1158,6 @@ static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
 	struct pcpu_sw_netstats *stats;
 	union vxlan_addr saddr;
 	int err = 0;
-	union vxlan_addr *remote_ip;
 
 	/* For flow based devices, map all packets to VNI 0 */
 	if (vs->flags & VXLAN_F_COLLECT_METADATA)
@@ -1169,7 +1168,6 @@ static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
 	if (!vxlan)
 		goto drop;
 
-	remote_ip = &vxlan->default_dst.remote_ip;
 	skb_reset_mac_header(skb);
 	skb_scrub_packet(skb, !net_eq(vxlan->net, dev_net(vxlan->dev)));
 	skb->protocol = eth_type_trans(skb, vxlan->dev);
@@ -1179,8 +1177,8 @@ static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
 	if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr))
 		goto drop;
 
-	/* Re-examine inner Ethernet packet */
-	if (remote_ip->sa.sa_family == AF_INET) {
+	/* Get data from the outer IP header */
+	if (vxlan_get_sk_family(vs) == AF_INET) {
 		oip = ip_hdr(skb);
 		saddr.sin.sin_addr.s_addr = oip->saddr;
 		saddr.sa.sa_family = AF_INET;
@@ -1848,6 +1846,34 @@ static int vxlan_xmit_skb(struct rtable *rt, struct sock *sk, struct sk_buff *sk
 			    !(vxflags & VXLAN_F_UDP_CSUM));
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
+static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+					  struct sk_buff *skb, int oif,
+					  const struct in6_addr *daddr,
+					  struct in6_addr *saddr)
+{
+	struct dst_entry *ndst;
+	struct flowi6 fl6;
+	int err;
+
+	memset(&fl6, 0, sizeof(fl6));
+	fl6.flowi6_oif = oif;
+	fl6.daddr = *daddr;
+	fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;
+	fl6.flowi6_mark = skb->mark;
+	fl6.flowi6_proto = IPPROTO_UDP;
+
+	err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
+					 vxlan->vn6_sock->sock->sk,
+					 &ndst, &fl6);
+	if (err < 0)
+		return ERR_PTR(err);
+
+	*saddr = fl6.saddr;
+	return ndst;
+}
+#endif
+
 /* Bypass encapsulation if the destination is local */
 static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
 			       struct vxlan_dev *dst_vxlan)
@@ -2035,21 +2061,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
 #if IS_ENABLED(CONFIG_IPV6)
 	} else {
 		struct dst_entry *ndst;
-		struct flowi6 fl6;
+		struct in6_addr saddr;
 		u32 rt6i_flags;
 
 		if (!vxlan->vn6_sock)
 			goto drop;
 		sk = vxlan->vn6_sock->sock->sk;
 
-		memset(&fl6, 0, sizeof(fl6));
-		fl6.flowi6_oif = rdst ? rdst->remote_ifindex : 0;
-		fl6.daddr = dst->sin6.sin6_addr;
-		fl6.saddr = vxlan->cfg.saddr.sin6.sin6_addr;
-		fl6.flowi6_mark = skb->mark;
-		fl6.flowi6_proto = IPPROTO_UDP;
-
-		if (ipv6_stub->ipv6_dst_lookup(vxlan->net, sk, &ndst, &fl6)) {
+		ndst = vxlan6_get_route(vxlan, skb,
+					rdst ? rdst->remote_ifindex : 0,
+					&dst->sin6.sin6_addr, &saddr);
+		if (IS_ERR(ndst)) {
 			netdev_dbg(dev, "no route to %pI6\n",
 				   &dst->sin6.sin6_addr);
 			dev->stats.tx_carrier_errors++;
@@ -2081,7 +2103,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
 		}
 
 		ttl = ttl ? : ip6_dst_hoplimit(ndst);
-		err = vxlan6_xmit_skb(ndst, sk, skb, dev, &fl6.saddr, &fl6.daddr,
+		err = vxlan6_xmit_skb(ndst, sk, skb, dev, &saddr, &dst->sin6.sin6_addr,
 				      0, ttl, src_port, dst_port, htonl(vni << 8), md,
 				      !net_eq(vxlan->net, dev_net(vxlan->dev)),
 				      flags);
@@ -2395,9 +2417,30 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
 					     vxlan->cfg.port_max, true);
 	dport = info->key.tp_dst ? : vxlan->cfg.dst_port;
 
-	if (ip_tunnel_info_af(info) == AF_INET)
+	if (ip_tunnel_info_af(info) == AF_INET) {
+		if (!vxlan->vn4_sock)
+			return -EINVAL;
 		return egress_ipv4_tun_info(dev, skb, info, sport, dport);
-	return -EINVAL;
+	} else {
+#if IS_ENABLED(CONFIG_IPV6)
+		struct dst_entry *ndst;
+
+		if (!vxlan->vn6_sock)
+			return -EINVAL;
+		ndst = vxlan6_get_route(vxlan, skb, 0,
+					&info->key.u.ipv6.dst,
+					&info->key.u.ipv6.src);
+		if (IS_ERR(ndst))
+			return PTR_ERR(ndst);
+		dst_release(ndst);
+
+		info->key.tp_src = sport;
+		info->key.tp_dst = dport;
+#else /* !CONFIG_IPV6 */
+		return -EPFNOSUPPORT;
+#endif
+	}
+	return 0;
 }
 
 static const struct net_device_ops vxlan_netdev_ops = {
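
[Editor's note] The vxlan change above factors the IPv6 route lookup into vxlan6_get_route() so the transmit path and ndo_fill_metadata_dst can share it: the helper returns the destination entry (or an encoded error) and hands the chosen source address back through an out-parameter. A hedged userspace sketch of that calling convention, with illustrative stand-in types only:

    #include <errno.h>
    #include <stdio.h>

    struct route { int ifindex; };   /* stand-in for a dst entry */

    /* Return the route (or NULL on failure) and fill in the source
     * address picked during the lookup via the out-parameter.
     */
    static struct route *lookup_route(int oif, unsigned int daddr,
                                      unsigned int *saddr)
    {
        static struct route r;

        if (!daddr)
            return NULL;          /* caller checks and propagates */
        r.ifindex = oif;
        *saddr = 0x0a000001;      /* source chosen by the lookup */
        return &r;
    }

    int main(void)
    {
        unsigned int saddr = 0;
        struct route *rt = lookup_route(2, 0x0a000002, &saddr);

        if (!rt)
            return -ENOENT;
        printf("oif=%d saddr=0x%x\n", rt->ifindex, saddr);
        return 0;
    }
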
@@ -2084,7 +2084,7 @@ struct pcpu_sw_netstats {
 })
 
 #define netdev_alloc_pcpu_stats(type)				\
-	__netdev_alloc_pcpu_stats(type, GFP_KERNEL);
+	__netdev_alloc_pcpu_stats(type, GFP_KERNEL)
 
 #include <linux/notifier.h>
 
@@ -14,7 +14,7 @@ struct nfnl_callback {
 	int (*call_rcu)(struct sock *nl, struct sk_buff *skb,
 			const struct nlmsghdr *nlh,
 			const struct nlattr * const cda[]);
-	int (*call_batch)(struct sock *nl, struct sk_buff *skb,
+	int (*call_batch)(struct net *net, struct sock *nl, struct sk_buff *skb,
 			  const struct nlmsghdr *nlh,
 			  const struct nlattr * const cda[]);
 	const struct nla_policy *policy;	/* netlink attribute policy */
@@ -9,6 +9,8 @@
 #ifndef __COMMON_HSI__
 #define __COMMON_HSI__
 
+#define CORE_SPQE_PAGE_SIZE_BYTES	4096
+
 #define FW_MAJOR_VERSION	8
 #define FW_MINOR_VERSION	4
 #define FW_REVISION_VERSION	2
@ -111,7 +111,8 @@ static inline u16 qed_chain_get_elem_left(struct qed_chain *p_chain)
|
||||||
used = ((u32)0x10000u + (u32)(p_chain->prod_idx)) -
|
used = ((u32)0x10000u + (u32)(p_chain->prod_idx)) -
|
||||||
(u32)p_chain->cons_idx;
|
(u32)p_chain->cons_idx;
|
||||||
if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR)
|
if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR)
|
||||||
used -= (used / p_chain->elem_per_page);
|
used -= p_chain->prod_idx / p_chain->elem_per_page -
|
||||||
|
p_chain->cons_idx / p_chain->elem_per_page;
|
||||||
|
|
||||||
return p_chain->capacity - used;
|
return p_chain->capacity - used;
|
||||||
}
|
}
|
||||||
|
|
|
@ -19,6 +19,7 @@
|
||||||
|
|
||||||
#include <linux/atomic.h>
|
#include <linux/atomic.h>
|
||||||
#include <linux/compiler.h>
|
#include <linux/compiler.h>
|
||||||
|
#include <linux/err.h>
|
||||||
#include <linux/errno.h>
|
#include <linux/errno.h>
|
||||||
#include <linux/jhash.h>
|
#include <linux/jhash.h>
|
||||||
#include <linux/list_nulls.h>
|
#include <linux/list_nulls.h>
|
||||||
|
@ -339,10 +340,11 @@ static inline int lockdep_rht_bucket_is_held(const struct bucket_table *tbl,
|
||||||
int rhashtable_init(struct rhashtable *ht,
|
int rhashtable_init(struct rhashtable *ht,
|
||||||
const struct rhashtable_params *params);
|
const struct rhashtable_params *params);
|
||||||
|
|
||||||
int rhashtable_insert_slow(struct rhashtable *ht, const void *key,
|
struct bucket_table *rhashtable_insert_slow(struct rhashtable *ht,
|
||||||
struct rhash_head *obj,
|
const void *key,
|
||||||
struct bucket_table *old_tbl);
|
struct rhash_head *obj,
|
||||||
int rhashtable_insert_rehash(struct rhashtable *ht);
|
struct bucket_table *old_tbl);
|
||||||
|
int rhashtable_insert_rehash(struct rhashtable *ht, struct bucket_table *tbl);
|
||||||
|
|
||||||
int rhashtable_walk_init(struct rhashtable *ht, struct rhashtable_iter *iter);
|
int rhashtable_walk_init(struct rhashtable *ht, struct rhashtable_iter *iter);
|
||||||
void rhashtable_walk_exit(struct rhashtable_iter *iter);
|
void rhashtable_walk_exit(struct rhashtable_iter *iter);
|
||||||
|
@ -598,9 +600,11 @@ static inline int __rhashtable_insert_fast(
|
||||||
|
|
||||||
new_tbl = rht_dereference_rcu(tbl->future_tbl, ht);
|
new_tbl = rht_dereference_rcu(tbl->future_tbl, ht);
|
||||||
if (unlikely(new_tbl)) {
|
if (unlikely(new_tbl)) {
|
||||||
err = rhashtable_insert_slow(ht, key, obj, new_tbl);
|
tbl = rhashtable_insert_slow(ht, key, obj, new_tbl);
|
||||||
if (err == -EAGAIN)
|
if (!IS_ERR_OR_NULL(tbl))
|
||||||
goto slow_path;
|
goto slow_path;
|
||||||
|
|
||||||
|
err = PTR_ERR(tbl);
|
||||||
goto out;
|
goto out;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -611,7 +615,7 @@ static inline int __rhashtable_insert_fast(
|
||||||
if (unlikely(rht_grow_above_100(ht, tbl))) {
|
if (unlikely(rht_grow_above_100(ht, tbl))) {
|
||||||
slow_path:
|
slow_path:
|
||||||
spin_unlock_bh(lock);
|
spin_unlock_bh(lock);
|
||||||
err = rhashtable_insert_rehash(ht);
|
err = rhashtable_insert_rehash(ht, tbl);
|
||||||
rcu_read_unlock();
|
rcu_read_unlock();
|
||||||
if (err)
|
if (err)
|
||||||
return err;
|
return err;
|
||||||
|
|
|
@@ -322,6 +322,39 @@ static inline void skb_dst_force(struct sk_buff *skb)
 	}
 }

+/**
+ * dst_hold_safe - Take a reference on a dst if possible
+ * @dst: pointer to dst entry
+ *
+ * This helper returns false if it could not safely
+ * take a reference on a dst.
+ */
+static inline bool dst_hold_safe(struct dst_entry *dst)
+{
+	if (dst->flags & DST_NOCACHE)
+		return atomic_inc_not_zero(&dst->__refcnt);
+	dst_hold(dst);
+	return true;
+}
+
+/**
+ * skb_dst_force_safe - makes sure skb dst is refcounted
+ * @skb: buffer
+ *
+ * If dst is not yet refcounted and not destroyed, grab a ref on it.
+ */
+static inline void skb_dst_force_safe(struct sk_buff *skb)
+{
+	if (skb_dst_is_noref(skb)) {
+		struct dst_entry *dst = skb_dst(skb);
+
+		if (!dst_hold_safe(dst))
+			dst = NULL;
+
+		skb->_skb_refdst = (unsigned long)dst;
+	}
+}
+
 /**
  * __skb_tunnel_rx - prepare skb for rx reinsert
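
The dst_hold_safe() helper added above exists because a DST_NOCACHE entry may already have dropped to a zero refcount on another CPU; taking a reference there must be an increment-unless-zero, not a plain atomic_inc(). Below is a minimal userspace sketch of that idiom using C11 atomics in place of the kernel's atomic_inc_not_zero(); the struct and names are illustrative only, not kernel API:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	atomic_int refcnt;	/* 0 means the object is already dying */
};

/* Take a reference only if the count has not already reached zero. */
static bool obj_hold_safe(struct obj *o)
{
	int old = atomic_load(&o->refcnt);

	while (old != 0) {
		/* CAS loop: retry until we bump the count or observe 0. */
		if (atomic_compare_exchange_weak(&o->refcnt, &old, old + 1))
			return true;
	}
	return false;	/* too late: somebody is freeing the object */
}

int main(void)
{
	struct obj o = { .refcnt = 1 };

	printf("hold: %d\n", obj_hold_safe(&o));	/* 1: ref taken */
	atomic_store(&o.refcnt, 0);			/* object dying */
	printf("hold: %d\n", obj_hold_safe(&o));	/* 0: refused */
	return 0;
}

A false return means "this dst is going away", which is exactly what skb_dst_force_safe() handles by storing a NULL dst instead of pinning freed memory.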
@@ -210,18 +210,37 @@ struct inet_sock {
 #define IP_CMSG_ORIGDSTADDR	BIT(6)
 #define IP_CMSG_CHECKSUM	BIT(7)

-/* SYNACK messages might be attached to request sockets.
+/**
+ * sk_to_full_sk - Access to a full socket
+ * @sk: pointer to a socket
+ *
+ * SYNACK messages might be attached to request sockets.
  * Some places want to reach the listener in this case.
  */
-static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)
+static inline struct sock *sk_to_full_sk(struct sock *sk)
 {
-	struct sock *sk = skb->sk;
-
+#ifdef CONFIG_INET
 	if (sk && sk->sk_state == TCP_NEW_SYN_RECV)
 		sk = inet_reqsk(sk)->rsk_listener;
+#endif
 	return sk;
 }

+/* sk_to_full_sk() variant with a const argument */
+static inline const struct sock *sk_const_to_full_sk(const struct sock *sk)
+{
+#ifdef CONFIG_INET
+	if (sk && sk->sk_state == TCP_NEW_SYN_RECV)
+		sk = ((const struct request_sock *)sk)->rsk_listener;
+#endif
+	return sk;
+}
+
+static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)
+{
+	return sk_to_full_sk(skb->sk);
+}
+
 static inline struct inet_sock *inet_sk(const struct sock *sk)
 {
 	return (struct inet_sock *)sk;

@@ -78,6 +78,7 @@ void inet_initpeers(void) __init;
 static inline void inetpeer_set_addr_v4(struct inetpeer_addr *iaddr, __be32 ip)
 {
 	iaddr->a4.addr = ip;
+	iaddr->a4.vif = 0;
 	iaddr->family = AF_INET;
 }
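
The one-line inetpeer_set_addr_v4() fix matters because inetpeer keys are compared as a whole, so a stale a4.vif makes two identical v4 addresses look different. A small sketch of that failure mode with a hypothetical two-field key (the kernel's comparison differs in detail, but the stale-field effect is the same):

#include <stdio.h>
#include <string.h>

struct key {
	unsigned int addr;
	int vif;
};

int main(void)
{
	struct key a, b;

	memset(&a, 0xff, sizeof(a));	/* simulate stale stack garbage */
	a.addr = 0x0a000001;		/* addr set, vif left stale */

	memset(&b, 0, sizeof(b));
	b.addr = 0x0a000001;		/* same address, vif properly 0 */

	/* a whole-struct comparison misses although the addresses match */
	printf("match: %d\n", memcmp(&a, &b, sizeof(a)) == 0);
	return 0;
}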
@@ -1493,7 +1493,8 @@ struct sctp_association {
 	 *	     : SACK's are not delayed (see Section 6).
 	 */
 	__u8	sack_needed:1,	   /* Do we need to sack the peer? */
-		sack_generation:1;
+		sack_generation:1,
+		zero_window_announced:1;
 	__u32	sack_cnt;

 	__u32	adaptation_ind;	 /* Adaptation Code point. */

@@ -388,7 +388,7 @@ struct sock {
 		struct socket_wq	*sk_wq_raw;
 	};
 #ifdef CONFIG_XFRM
-	struct xfrm_policy	*sk_policy[2];
+	struct xfrm_policy __rcu *sk_policy[2];
 #endif
 	struct dst_entry	*sk_rx_dst;
 	struct dst_entry __rcu	*sk_dst_cache;
@@ -404,6 +404,7 @@ struct sock {
 				sk_userlocks : 4,
 				sk_protocol  : 8,
 				sk_type      : 16;
+#define SK_PROTOCOL_MAX U8_MAX
 	kmemcheck_bitfield_end(flags);
 	int			sk_wmem_queued;
 	gfp_t			sk_allocation;
@@ -740,6 +741,8 @@ enum sock_flags {
 	SOCK_SELECT_ERR_QUEUE, /* Wake select on error queue */
 };

+#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
+
 static inline void sock_copy_flags(struct sock *nsk, struct sock *osk)
 {
 	nsk->sk_flags = osk->sk_flags;
@@ -814,7 +817,7 @@ void sk_stream_write_space(struct sock *sk);
 static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
 	/* dont let skb dst not refcounted, we are going to leave rcu lock */
-	skb_dst_force(skb);
+	skb_dst_force_safe(skb);

 	if (!sk->sk_backlog.tail)
 		sk->sk_backlog.head = skb;

@@ -79,7 +79,7 @@ struct vxlanhdr {
 };

 /* VXLAN header flags. */
-#define VXLAN_HF_RCO BIT(24)
+#define VXLAN_HF_RCO BIT(21)
 #define VXLAN_HF_VNI BIT(27)
 #define VXLAN_HF_GBP BIT(31)
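
The VXLAN_HF_RCO change is a single bit, but worth spelling out: the remote checksum offload flag belongs at bit 21 of the host-order flags word, and BIT(24) had been setting an unrelated position. A quick userspace check of the two constants (BIT() expanded locally; only the numeric values are the point):

#include <stdio.h>

#define BIT(n) (1u << (n))

int main(void)
{
	printf("old VXLAN_HF_RCO = BIT(24) = 0x%08x\n", BIT(24));
	printf("new VXLAN_HF_RCO = BIT(21) = 0x%08x\n", BIT(21));
	return 0;
}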
@@ -548,6 +548,7 @@ struct xfrm_policy {
 	u16			family;
 	struct xfrm_sec_ctx	*security;
 	struct xfrm_tmpl	xfrm_vec[XFRM_MAX_DEPTH];
+	struct rcu_head		rcu;
 };

 static inline struct net *xp_net(const struct xfrm_policy *xp)
@@ -1141,12 +1142,14 @@ static inline int xfrm6_route_forward(struct sk_buff *skb)
 	return xfrm_route_forward(skb, AF_INET6);
 }

-int __xfrm_sk_clone_policy(struct sock *sk);
+int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk);

-static inline int xfrm_sk_clone_policy(struct sock *sk)
+static inline int xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
 {
-	if (unlikely(sk->sk_policy[0] || sk->sk_policy[1]))
-		return __xfrm_sk_clone_policy(sk);
+	sk->sk_policy[0] = NULL;
+	sk->sk_policy[1] = NULL;
+	if (unlikely(osk->sk_policy[0] || osk->sk_policy[1]))
+		return __xfrm_sk_clone_policy(sk, osk);
 	return 0;
 }
@@ -1154,12 +1157,16 @@ int xfrm_policy_delete(struct xfrm_policy *pol, int dir);

 static inline void xfrm_sk_free_policy(struct sock *sk)
 {
-	if (unlikely(sk->sk_policy[0] != NULL)) {
-		xfrm_policy_delete(sk->sk_policy[0], XFRM_POLICY_MAX);
+	struct xfrm_policy *pol;
+
+	pol = rcu_dereference_protected(sk->sk_policy[0], 1);
+	if (unlikely(pol != NULL)) {
+		xfrm_policy_delete(pol, XFRM_POLICY_MAX);
 		sk->sk_policy[0] = NULL;
 	}
-	if (unlikely(sk->sk_policy[1] != NULL)) {
-		xfrm_policy_delete(sk->sk_policy[1], XFRM_POLICY_MAX+1);
+	pol = rcu_dereference_protected(sk->sk_policy[1], 1);
+	if (unlikely(pol != NULL)) {
+		xfrm_policy_delete(pol, XFRM_POLICY_MAX+1);
 		sk->sk_policy[1] = NULL;
 	}
 }
@@ -1169,7 +1176,7 @@ void xfrm_garbage_collect(struct net *net);
 #else

 static inline void xfrm_sk_free_policy(struct sock *sk) {}
-static inline int xfrm_sk_clone_policy(struct sock *sk) { return 0; }
+static inline int xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk) { return 0; }
 static inline int xfrm6_route_forward(struct sk_buff *skb) { return 1; }
 static inline int xfrm4_route_forward(struct sk_buff *skb) { return 1; }
 static inline int xfrm6_policy_check(struct sock *sk, int dir, struct sk_buff *skb)

@@ -186,6 +186,7 @@ header-y += if_tunnel.h
 header-y += if_vlan.h
 header-y += if_x25.h
 header-y += igmp.h
+header-y += ila.h
 header-y += in6.h
 header-y += inet_diag.h
 header-y += in.h

@@ -628,7 +628,7 @@ struct ovs_action_hash {
  * @OVS_CT_ATTR_MARK: u32 value followed by u32 mask. For each bit set in the
  * mask, the corresponding bit in the value is copied to the connection
  * tracking mark field in the connection.
- * @OVS_CT_ATTR_LABEL: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN
+ * @OVS_CT_ATTR_LABELS: %OVS_CT_LABELS_LEN value followed by %OVS_CT_LABELS_LEN
  * mask. For each bit set in the mask, the corresponding bit in the value is
  * copied to the connection tracking label field in the connection.
  * @OVS_CT_ATTR_HELPER: variable length string defining conntrack ALG.

@@ -389,33 +389,31 @@ static bool rhashtable_check_elasticity(struct rhashtable *ht,
 	return false;
 }

-int rhashtable_insert_rehash(struct rhashtable *ht)
+int rhashtable_insert_rehash(struct rhashtable *ht,
+			     struct bucket_table *tbl)
 {
 	struct bucket_table *old_tbl;
 	struct bucket_table *new_tbl;
-	struct bucket_table *tbl;
 	unsigned int size;
 	int err;

 	old_tbl = rht_dereference_rcu(ht->tbl, ht);
-	tbl = rhashtable_last_table(ht, old_tbl);

 	size = tbl->size;

+	err = -EBUSY;
+
 	if (rht_grow_above_75(ht, tbl))
 		size *= 2;
 	/* Do not schedule more than one rehash */
 	else if (old_tbl != tbl)
-		return -EBUSY;
+		goto fail;
+
+	err = -ENOMEM;

 	new_tbl = bucket_table_alloc(ht, size, GFP_ATOMIC);
-	if (new_tbl == NULL) {
-		/* Schedule async resize/rehash to try allocation
-		 * non-atomic context.
-		 */
-		schedule_work(&ht->run_work);
-		return -ENOMEM;
-	}
+	if (new_tbl == NULL)
+		goto fail;

 	err = rhashtable_rehash_attach(ht, tbl, new_tbl);
 	if (err) {
@@ -426,12 +424,24 @@ int rhashtable_insert_rehash(struct rhashtable *ht)
 		schedule_work(&ht->run_work);

 	return err;
+
+fail:
+	/* Do not fail the insert if someone else did a rehash. */
+	if (likely(rcu_dereference_raw(tbl->future_tbl)))
+		return 0;
+
+	/* Schedule async rehash to retry allocation in process context. */
+	if (err == -ENOMEM)
+		schedule_work(&ht->run_work);
+
+	return err;
 }
 EXPORT_SYMBOL_GPL(rhashtable_insert_rehash);

-int rhashtable_insert_slow(struct rhashtable *ht, const void *key,
-			   struct rhash_head *obj,
-			   struct bucket_table *tbl)
+struct bucket_table *rhashtable_insert_slow(struct rhashtable *ht,
+					    const void *key,
+					    struct rhash_head *obj,
+					    struct bucket_table *tbl)
 {
 	struct rhash_head *head;
 	unsigned int hash;
@@ -467,7 +477,12 @@ int rhashtable_insert_slow(struct rhashtable *ht, const void *key,
 exit:
 	spin_unlock(rht_bucket_lock(tbl, hash));

-	return err;
+	if (err == 0)
+		return NULL;
+	else if (err == -EAGAIN)
+		return tbl;
+	else
+		return ERR_PTR(err);
 }
 EXPORT_SYMBOL_GPL(rhashtable_insert_slow);
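
rhashtable_insert_slow() now multiplexes three outcomes through one pointer: NULL for "inserted", a bucket-table pointer for "retry on this table", and an errno encoded into the pointer otherwise. A self-contained sketch of that ERR_PTR-style convention; the helpers are reimplemented here for userspace, in the kernel they come from <linux/err.h>:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace re-creation of the kernel's ERR_PTR trick: small negative
 * errno values map into the top, never-mappable page of address space.
 */
#define MAX_ERRNO 4095

static void *ERR_PTR(long err) { return (void *)err; }
static long PTR_ERR(const void *p) { return (long)p; }
static int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

static int table = 42;	/* stand-in for a bucket_table to retry on */

/* Mimics the new contract: NULL = done, table = retry, ERR_PTR = fail. */
static void *insert_slow(int outcome)
{
	if (outcome == 0)
		return NULL;
	if (outcome == 1)
		return &table;
	return ERR_PTR(-ENOMEM);
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		void *p = insert_slow(i);

		if (!p)
			printf("inserted\n");
		else if (!IS_ERR(p))
			printf("retry with tbl=%d\n", *(int *)p);
		else
			printf("error %ld\n", PTR_ERR(p));
	}
	return 0;
}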
@@ -503,10 +518,10 @@ int rhashtable_walk_init(struct rhashtable *ht, struct rhashtable_iter *iter)
 	if (!iter->walker)
 		return -ENOMEM;

-	mutex_lock(&ht->mutex);
+	spin_lock(&ht->lock);
 	iter->walker->tbl = rht_dereference(ht->tbl, ht);
 	list_add(&iter->walker->list, &iter->walker->tbl->walkers);
-	mutex_unlock(&ht->mutex);
+	spin_unlock(&ht->lock);

 	return 0;
 }
@@ -520,10 +535,10 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_init);
  */
 void rhashtable_walk_exit(struct rhashtable_iter *iter)
 {
-	mutex_lock(&iter->ht->mutex);
+	spin_lock(&iter->ht->lock);
 	if (iter->walker->tbl)
 		list_del(&iter->walker->list);
-	mutex_unlock(&iter->ht->mutex);
+	spin_unlock(&iter->ht->lock);
 	kfree(iter->walker);
 }
 EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
@@ -547,14 +562,12 @@ int rhashtable_walk_start(struct rhashtable_iter *iter)
 {
 	struct rhashtable *ht = iter->ht;

-	mutex_lock(&ht->mutex);
-
-	if (iter->walker->tbl)
-		list_del(&iter->walker->list);
-
 	rcu_read_lock();

-	mutex_unlock(&ht->mutex);
+	spin_lock(&ht->lock);
+	if (iter->walker->tbl)
+		list_del(&iter->walker->list);
+	spin_unlock(&ht->lock);

 	if (!iter->walker->tbl) {
 		iter->walker->tbl = rht_dereference_rcu(ht->tbl, ht);
@@ -723,9 +736,6 @@ int rhashtable_init(struct rhashtable *ht,
 	if (params->nulls_base && params->nulls_base < (1U << RHT_BASE_SHIFT))
 		return -EINVAL;

-	if (params->nelem_hint)
-		size = rounded_hashtable_size(params);
-
 	memset(ht, 0, sizeof(*ht));
 	mutex_init(&ht->mutex);
 	spin_lock_init(&ht->lock);
@@ -745,6 +755,9 @@ int rhashtable_init(struct rhashtable *ht,

 	ht->p.min_size = max(ht->p.min_size, HASH_MIN_SIZE);

+	if (params->nelem_hint)
+		size = rounded_hashtable_size(&ht->p);
+
 	/* The maximum (not average) chain length grows with the
 	 * size of the hash table, at a rate of (log N)/(log log N).
 	 * The value of 16 is selected so that even if the hash

@@ -805,6 +805,9 @@ static int ax25_create(struct net *net, struct socket *sock, int protocol,
 	struct sock *sk;
 	ax25_cb *ax25;

+	if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
+		return -EINVAL;
+
 	if (!net_eq(net, &init_net))
 		return -EAFNOSUPPORT;
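
The guard added to ax25_create() above repeats in dn_create(), inet_create(), inet6_create() and irda_create() further down: sk_protocol is an 8-bit field, so an unchecked protocol value from socket(2) silently truncates before any later comparison sees it. A sketch of the truncation these checks prevent (only the bitfield width is modeled):

#include <stdio.h>

#define SK_PROTOCOL_MAX 255	/* U8_MAX: sk_protocol is 8 bits wide */

struct fake_sock {
	unsigned int sk_protocol : 8;
};

int main(void)
{
	struct fake_sock sk;
	int protocol = 256;	/* as passed in from socket(2) */

	sk.sk_protocol = protocol;	/* silently wraps to 0 */
	printf("stored: %u\n", sk.sk_protocol);

	/* ...which is why ->create() now rejects it up front: */
	if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
		printf("-EINVAL\n");
	return 0;
}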
@@ -566,6 +566,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
 	int select;
 	batadv_dat_addr_t last_max = BATADV_DAT_ADDR_MAX, ip_key;
 	struct batadv_dat_candidate *res;
+	struct batadv_dat_entry dat;

 	if (!bat_priv->orig_hash)
 		return NULL;
@@ -575,7 +576,9 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst)
 	if (!res)
 		return NULL;

-	ip_key = (batadv_dat_addr_t)batadv_hash_dat(&ip_dst,
+	dat.ip = ip_dst;
+	dat.vid = 0;
+	ip_key = (batadv_dat_addr_t)batadv_hash_dat(&dat,
 						    BATADV_DAT_ADDR_MAX);

 	batadv_dbg(BATADV_DBG_DAT, bat_priv,

@@ -836,6 +836,7 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
 	u8 *orig_addr;
 	struct batadv_orig_node *orig_node = NULL;
 	int check, hdr_size = sizeof(*unicast_packet);
+	enum batadv_subtype subtype;
 	bool is4addr;

 	unicast_packet = (struct batadv_unicast_packet *)skb->data;
@@ -863,10 +864,20 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
 	/* packet for me */
 	if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
 		if (is4addr) {
-			batadv_dat_inc_counter(bat_priv,
-					       unicast_4addr_packet->subtype);
-			orig_addr = unicast_4addr_packet->src;
-			orig_node = batadv_orig_hash_find(bat_priv, orig_addr);
+			subtype = unicast_4addr_packet->subtype;
+			batadv_dat_inc_counter(bat_priv, subtype);
+
+			/* Only payload data should be considered for speedy
+			 * join. For example, DAT also uses unicast 4addr
+			 * types, but those packets should not be considered
+			 * for speedy join, since the clients do not actually
+			 * reside at the sending originator.
+			 */
+			if (subtype == BATADV_P_DATA) {
+				orig_addr = unicast_4addr_packet->src;
+				orig_node = batadv_orig_hash_find(bat_priv,
+								  orig_addr);
+			}
 		}

 		if (batadv_dat_snoop_incoming_arp_request(bat_priv, skb,

@@ -68,13 +68,15 @@ static void batadv_tt_global_del(struct batadv_priv *bat_priv,
 				 unsigned short vid, const char *message,
 				 bool roaming);

-/* returns 1 if they are the same mac addr */
+/* returns 1 if they are the same mac addr and vid */
 static int batadv_compare_tt(const struct hlist_node *node, const void *data2)
 {
 	const void *data1 = container_of(node, struct batadv_tt_common_entry,
 					 hash_entry);
+	const struct batadv_tt_common_entry *tt1 = data1;
+	const struct batadv_tt_common_entry *tt2 = data2;

-	return batadv_compare_eth(data1, data2);
+	return (tt1->vid == tt2->vid) && batadv_compare_eth(data1, data2);
 }

 /**
@@ -1427,9 +1429,15 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
 	}

 	/* if the client was temporary added before receiving the first
-	 * OGM announcing it, we have to clear the TEMP flag
+	 * OGM announcing it, we have to clear the TEMP flag. Also,
+	 * remove the previous temporary orig node and re-add it
+	 * if required. If the orig entry changed, the new one which
+	 * is a non-temporary entry is preferred.
 	 */
-	common->flags &= ~BATADV_TT_CLIENT_TEMP;
+	if (common->flags & BATADV_TT_CLIENT_TEMP) {
+		batadv_tt_global_del_orig_list(tt_global_entry);
+		common->flags &= ~BATADV_TT_CLIENT_TEMP;
+	}

 	/* the change can carry possible "attribute" flags like the
 	 * TT_CLIENT_WIFI, therefore they have to be copied in the

@@ -526,6 +526,9 @@ static int sco_sock_bind(struct socket *sock, struct sockaddr *addr,
 	if (!addr || addr->sa_family != AF_BLUETOOTH)
 		return -EINVAL;

+	if (addr_len < sizeof(struct sockaddr_sco))
+		return -EINVAL;
+
 	lock_sock(sk);

 	if (sk->sk_state != BT_OPEN) {

@@ -3643,7 +3643,8 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
 	serr->ee.ee_info = tstype;
 	if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) {
 		serr->ee.ee_data = skb_shinfo(skb)->tskey;
-		if (sk->sk_protocol == IPPROTO_TCP)
+		if (sk->sk_protocol == IPPROTO_TCP &&
+		    sk->sk_type == SOCK_STREAM)
 			serr->ee.ee_data -= sk->sk_tskey;
 	}
@@ -4268,7 +4269,7 @@ static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb)
 		return NULL;
 	}

-	memmove(skb->data - ETH_HLEN, skb->data - skb->mac_len,
+	memmove(skb->data - ETH_HLEN, skb->data - skb->mac_len - VLAN_HLEN,
 		2 * ETH_ALEN);
 	skb->mac_header += VLAN_HLEN;
 	return skb;
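
The skb_reorder_vlan_header() fix is easiest to see on a raw buffer: when untagging in place, the two MAC addresses have to be read from the original header, which starts VLAN_HLEN bytes before the point the old code used, otherwise four bytes of the addresses are corrupted. A standalone sketch of the move, simplified to shifting the addresses past the tag (constants as in the kernel, frame bytes made up):

#include <stdio.h>
#include <string.h>

#define ETH_ALEN	6
#define ETH_HLEN	14
#define VLAN_HLEN	4

int main(void)
{
	/* dst MAC, src MAC, 802.1Q tag, inner ethertype */
	unsigned char frame[] = {
		1, 1, 1, 1, 1, 1,		/* dst */
		2, 2, 2, 2, 2, 2,		/* src */
		0x81, 0x00, 0x00, 0x05,		/* VLAN tag to strip */
		0x08, 0x00,			/* inner protocol */
	};

	/* Untag in place: both MACs move VLAN_HLEN bytes forward so the
	 * rebuilt header ends right before the inner ethertype.
	 */
	memmove(frame + VLAN_HLEN, frame, 2 * ETH_ALEN);

	for (int i = 0; i < ETH_HLEN; i++)
		printf("%02x ", frame[VLAN_HLEN + i]);
	printf("\n");
	return 0;
}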
@@ -433,8 +433,6 @@ static bool sock_needs_netstamp(const struct sock *sk)
 	}
 }

-#define SK_FLAGS_TIMESTAMP ((1UL << SOCK_TIMESTAMP) | (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE))
-
 static void sock_disable_timestamp(struct sock *sk, unsigned long flags)
 {
 	if (sk->sk_flags & flags) {
@@ -874,7 +872,8 @@ int sock_setsockopt(struct socket *sock, int level, int optname,

 		if (val & SOF_TIMESTAMPING_OPT_ID &&
 		    !(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)) {
-			if (sk->sk_protocol == IPPROTO_TCP) {
+			if (sk->sk_protocol == IPPROTO_TCP &&
+			    sk->sk_type == SOCK_STREAM) {
 				if (sk->sk_state != TCP_ESTABLISHED) {
 					ret = -EINVAL;
 					break;
@@ -1552,7 +1551,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 		 */
 		is_charged = sk_filter_charge(newsk, filter);

-		if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk))) {
+		if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {
 			/* It is still raw copy of parent, so invalidate
 			 * destructor and make plain sk_free() */
 			newsk->sk_destruct = NULL;

@@ -678,6 +678,9 @@ static int dn_create(struct net *net, struct socket *sock, int protocol,
 {
 	struct sock *sk;

+	if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
+		return -EINVAL;
+
 	if (!net_eq(net, &init_net))
 		return -EAFNOSUPPORT;

@@ -257,6 +257,9 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
 	int try_loading_module = 0;
 	int err;

+	if (protocol < 0 || protocol >= IPPROTO_MAX)
+		return -EINVAL;
+
 	sock->state = SS_UNCONNECTED;

 	/* Look for the requested type/protocol pair. */

@@ -1155,6 +1155,7 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
 static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
 {
 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct netdev_notifier_changeupper_info *info;
 	struct in_device *in_dev;
 	struct net *net = dev_net(dev);
 	unsigned int flags;
@@ -1193,6 +1194,14 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
 	case NETDEV_CHANGEMTU:
 		rt_cache_flush(net);
 		break;
+	case NETDEV_CHANGEUPPER:
+		info = ptr;
+		/* flush all routes if dev is linked to or unlinked from
+		 * an L3 master device (e.g., VRF)
+		 */
+		if (info->upper_dev && netif_is_l3_master(info->upper_dev))
+			fib_disable_ip(dev, NETDEV_DOWN, true);
+		break;
 	}
 	return NOTIFY_DONE;
 }

@@ -24,6 +24,7 @@ struct fou {
 	u16 type;
 	struct udp_offload udp_offloads;
 	struct list_head list;
+	struct rcu_head rcu;
 };

 #define FOU_F_REMCSUM_NOPARTIAL BIT(0)
@@ -417,7 +418,7 @@ static void fou_release(struct fou *fou)
 	list_del(&fou->list);
 	udp_tunnel_sock_release(sock);

-	kfree(fou);
+	kfree_rcu(fou, rcu);
 }

 static int fou_encap_init(struct sock *sk, struct fou *fou, struct fou_cfg *cfg)

@@ -60,6 +60,7 @@ config NFT_REJECT_IPV4

 config NFT_DUP_IPV4
 	tristate "IPv4 nf_tables packet duplication support"
+	depends on !NF_CONNTRACK || NF_CONNTRACK
 	select NF_DUP_IPV4
 	help
 	  This module enables IPv4 packet duplication support for nf_tables.

@@ -1493,7 +1493,7 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb)
 	if (likely(sk->sk_rx_dst))
 		skb_dst_drop(skb);
 	else
-		skb_dst_force(skb);
+		skb_dst_force_safe(skb);

 	__skb_queue_tail(&tp->ucopy.prequeue, skb);
 	tp->ucopy.memory += skb->truesize;
@@ -1721,8 +1721,7 @@ void inet_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
 {
 	struct dst_entry *dst = skb_dst(skb);

-	if (dst) {
-		dst_hold(dst);
+	if (dst && dst_hold_safe(dst)) {
 		sk->sk_rx_dst = dst;
 		inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
 	}

@@ -3150,7 +3150,7 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct tcp_fastopen_request *fo = tp->fastopen_req;
-	int syn_loss = 0, space, err = 0, copied;
+	int syn_loss = 0, space, err = 0;
 	unsigned long last_syn_loss = 0;
 	struct sk_buff *syn_data;
@@ -3188,17 +3188,18 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
 		goto fallback;
 	syn_data->ip_summed = CHECKSUM_PARTIAL;
 	memcpy(syn_data->cb, syn->cb, sizeof(syn->cb));
-	copied = copy_from_iter(skb_put(syn_data, space), space,
-				&fo->data->msg_iter);
-	if (unlikely(!copied)) {
-		kfree_skb(syn_data);
-		goto fallback;
-	}
-	if (copied != space) {
-		skb_trim(syn_data, copied);
-		space = copied;
+	if (space) {
+		int copied = copy_from_iter(skb_put(syn_data, space), space,
+					    &fo->data->msg_iter);
+		if (unlikely(!copied)) {
+			kfree_skb(syn_data);
+			goto fallback;
+		}
+		if (copied != space) {
+			skb_trim(syn_data, copied);
+			space = copied;
+		}
 	}

 	/* No more data pending in inet_wait_for_connect() */
 	if (space == fo->size)
 		fo->data = NULL;
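
The fastopen regression came down to an ambiguous return value: copy_from_iter() yields 0 both on failure and when asked to copy zero bytes, so a SYN carrying no data was misread as a failed copy and took the fallback path. A sketch of the repaired control flow, with plain memcpy standing in for copy_from_iter():

#include <stdio.h>
#include <string.h>

/* Stand-in for copy_from_iter(): returns the number of bytes copied. */
static size_t copy_bytes(char *dst, const char *src, size_t len)
{
	memcpy(dst, src, len);
	return len;
}

static int send_syn_data(const char *data, size_t space)
{
	char buf[64];

	/* Old logic treated !copied as failure, so space == 0 aborted
	 * too. Only attempt and validate the copy when there is data.
	 */
	if (space) {
		size_t copied = copy_bytes(buf, data, space);

		if (!copied)
			return -1;	/* genuine copy failure */
	}
	printf("SYN sent with %zu byte(s) of data\n", space);
	return 0;
}

int main(void)
{
	send_syn_data("hello", 5);
	send_syn_data(NULL, 0);	/* fastopen with no data must still work */
	return 0;
}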
@ -350,6 +350,12 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
|
||||||
setup_timer(&ndev->rs_timer, addrconf_rs_timer,
|
setup_timer(&ndev->rs_timer, addrconf_rs_timer,
|
||||||
(unsigned long)ndev);
|
(unsigned long)ndev);
|
||||||
memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf));
|
memcpy(&ndev->cnf, dev_net(dev)->ipv6.devconf_dflt, sizeof(ndev->cnf));
|
||||||
|
|
||||||
|
if (ndev->cnf.stable_secret.initialized)
|
||||||
|
ndev->addr_gen_mode = IN6_ADDR_GEN_MODE_STABLE_PRIVACY;
|
||||||
|
else
|
||||||
|
ndev->addr_gen_mode = IN6_ADDR_GEN_MODE_EUI64;
|
||||||
|
|
||||||
ndev->cnf.mtu6 = dev->mtu;
|
ndev->cnf.mtu6 = dev->mtu;
|
||||||
ndev->cnf.sysctl = NULL;
|
ndev->cnf.sysctl = NULL;
|
||||||
ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl);
|
ndev->nd_parms = neigh_parms_alloc(dev, &nd_tbl);
|
||||||
|
@ -2455,7 +2461,7 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len, bool sllao)
|
||||||
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
|
#ifdef CONFIG_IPV6_OPTIMISTIC_DAD
|
||||||
if (in6_dev->cnf.optimistic_dad &&
|
if (in6_dev->cnf.optimistic_dad &&
|
||||||
!net->ipv6.devconf_all->forwarding && sllao)
|
!net->ipv6.devconf_all->forwarding && sllao)
|
||||||
addr_flags = IFA_F_OPTIMISTIC;
|
addr_flags |= IFA_F_OPTIMISTIC;
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
/* Do not allow to create too much of autoconfigured
|
/* Do not allow to create too much of autoconfigured
|
||||||
|
|
|
@ -109,6 +109,9 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
|
||||||
int try_loading_module = 0;
|
int try_loading_module = 0;
|
||||||
int err;
|
int err;
|
||||||
|
|
||||||
|
if (protocol < 0 || protocol >= IPPROTO_MAX)
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
/* Look for the requested type/protocol pair. */
|
/* Look for the requested type/protocol pair. */
|
||||||
lookup_protocol:
|
lookup_protocol:
|
||||||
err = -ESOCKTNOSUPPORT;
|
err = -ESOCKTNOSUPPORT;
|
||||||
|
|
|
@ -1571,13 +1571,11 @@ static int ip6gre_changelink(struct net_device *dev, struct nlattr *tb[],
|
||||||
return -EEXIST;
|
return -EEXIST;
|
||||||
} else {
|
} else {
|
||||||
t = nt;
|
t = nt;
|
||||||
|
|
||||||
ip6gre_tunnel_unlink(ign, t);
|
|
||||||
ip6gre_tnl_change(t, &p, !tb[IFLA_MTU]);
|
|
||||||
ip6gre_tunnel_link(ign, t);
|
|
||||||
netdev_state_change(dev);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
ip6gre_tunnel_unlink(ign, t);
|
||||||
|
ip6gre_tnl_change(t, &p, !tb[IFLA_MTU]);
|
||||||
|
ip6gre_tunnel_link(ign, t);
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -49,6 +49,7 @@ config NFT_REJECT_IPV6
|
||||||
|
|
||||||
config NFT_DUP_IPV6
|
config NFT_DUP_IPV6
|
||||||
tristate "IPv6 nf_tables packet duplication support"
|
tristate "IPv6 nf_tables packet duplication support"
|
||||||
|
depends on !NF_CONNTRACK || NF_CONNTRACK
|
||||||
select NF_DUP_IPV6
|
select NF_DUP_IPV6
|
||||||
help
|
help
|
||||||
This module enables IPv6 packet duplication support for nf_tables.
|
This module enables IPv6 packet duplication support for nf_tables.
|
||||||
|
|
|
@ -93,10 +93,9 @@ static void inet6_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
|
||||||
{
|
{
|
||||||
struct dst_entry *dst = skb_dst(skb);
|
struct dst_entry *dst = skb_dst(skb);
|
||||||
|
|
||||||
if (dst) {
|
if (dst && dst_hold_safe(dst)) {
|
||||||
const struct rt6_info *rt = (const struct rt6_info *)dst;
|
const struct rt6_info *rt = (const struct rt6_info *)dst;
|
||||||
|
|
||||||
dst_hold(dst);
|
|
||||||
sk->sk_rx_dst = dst;
|
sk->sk_rx_dst = dst;
|
||||||
inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
|
inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
|
||||||
inet6_sk(sk)->rx_dst_cookie = rt6_get_cookie(rt);
|
inet6_sk(sk)->rx_dst_cookie = rt6_get_cookie(rt);
|
||||||
|
|
|
@ -1086,6 +1086,9 @@ static int irda_create(struct net *net, struct socket *sock, int protocol,
|
||||||
struct sock *sk;
|
struct sock *sk;
|
||||||
struct irda_sock *self;
|
struct irda_sock *self;
|
||||||
|
|
||||||
|
if (protocol < 0 || protocol > SK_PROTOCOL_MAX)
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
if (net != &init_net)
|
if (net != &init_net)
|
||||||
return -EAFNOSUPPORT;
|
return -EAFNOSUPPORT;
|
||||||
|
|
||||||
|
|
|
@ -1169,8 +1169,7 @@ static int sta_apply_parameters(struct ieee80211_local *local,
|
||||||
* rc isn't initialized here yet, so ignore it
|
* rc isn't initialized here yet, so ignore it
|
||||||
*/
|
*/
|
||||||
__ieee80211_vht_handle_opmode(sdata, sta,
|
__ieee80211_vht_handle_opmode(sdata, sta,
|
||||||
params->opmode_notif,
|
params->opmode_notif, band);
|
||||||
band, false);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (ieee80211_vif_is_mesh(&sdata->vif))
|
if (ieee80211_vif_is_mesh(&sdata->vif))
|
||||||
|
|
|
@ -1709,10 +1709,10 @@ enum ieee80211_sta_rx_bandwidth ieee80211_sta_cur_vht_bw(struct sta_info *sta);
|
||||||
void ieee80211_sta_set_rx_nss(struct sta_info *sta);
|
void ieee80211_sta_set_rx_nss(struct sta_info *sta);
|
||||||
u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
struct sta_info *sta, u8 opmode,
|
struct sta_info *sta, u8 opmode,
|
||||||
enum ieee80211_band band, bool nss_only);
|
enum ieee80211_band band);
|
||||||
void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
struct sta_info *sta, u8 opmode,
|
struct sta_info *sta, u8 opmode,
|
||||||
enum ieee80211_band band, bool nss_only);
|
enum ieee80211_band band);
|
||||||
void ieee80211_apply_vhtcap_overrides(struct ieee80211_sub_if_data *sdata,
|
void ieee80211_apply_vhtcap_overrides(struct ieee80211_sub_if_data *sdata,
|
||||||
struct ieee80211_sta_vht_cap *vht_cap);
|
struct ieee80211_sta_vht_cap *vht_cap);
|
||||||
void ieee80211_get_vht_mask_from_cap(__le16 vht_cap,
|
void ieee80211_get_vht_mask_from_cap(__le16 vht_cap,
|
||||||
|
|
|
@ -1379,21 +1379,26 @@ static u32 ieee80211_handle_pwr_constr(struct ieee80211_sub_if_data *sdata,
|
||||||
*/
|
*/
|
||||||
if (has_80211h_pwr &&
|
if (has_80211h_pwr &&
|
||||||
(!has_cisco_pwr || pwr_level_80211h <= pwr_level_cisco)) {
|
(!has_cisco_pwr || pwr_level_80211h <= pwr_level_cisco)) {
|
||||||
|
new_ap_level = pwr_level_80211h;
|
||||||
|
|
||||||
|
if (sdata->ap_power_level == new_ap_level)
|
||||||
|
return 0;
|
||||||
|
|
||||||
sdata_dbg(sdata,
|
sdata_dbg(sdata,
|
||||||
"Limiting TX power to %d (%d - %d) dBm as advertised by %pM\n",
|
"Limiting TX power to %d (%d - %d) dBm as advertised by %pM\n",
|
||||||
pwr_level_80211h, chan_pwr, pwr_reduction_80211h,
|
pwr_level_80211h, chan_pwr, pwr_reduction_80211h,
|
||||||
sdata->u.mgd.bssid);
|
sdata->u.mgd.bssid);
|
||||||
new_ap_level = pwr_level_80211h;
|
|
||||||
} else { /* has_cisco_pwr is always true here. */
|
} else { /* has_cisco_pwr is always true here. */
|
||||||
|
new_ap_level = pwr_level_cisco;
|
||||||
|
|
||||||
|
if (sdata->ap_power_level == new_ap_level)
|
||||||
|
return 0;
|
||||||
|
|
||||||
sdata_dbg(sdata,
|
sdata_dbg(sdata,
|
||||||
"Limiting TX power to %d dBm as advertised by %pM\n",
|
"Limiting TX power to %d dBm as advertised by %pM\n",
|
||||||
pwr_level_cisco, sdata->u.mgd.bssid);
|
pwr_level_cisco, sdata->u.mgd.bssid);
|
||||||
new_ap_level = pwr_level_cisco;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (sdata->ap_power_level == new_ap_level)
|
|
||||||
return 0;
|
|
||||||
|
|
||||||
sdata->ap_power_level = new_ap_level;
|
sdata->ap_power_level = new_ap_level;
|
||||||
if (__ieee80211_recalc_txpower(sdata))
|
if (__ieee80211_recalc_txpower(sdata))
|
||||||
return BSS_CHANGED_TXPOWER;
|
return BSS_CHANGED_TXPOWER;
|
||||||
|
@ -3575,7 +3580,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata,
|
||||||
|
|
||||||
if (sta && elems.opmode_notif)
|
if (sta && elems.opmode_notif)
|
||||||
ieee80211_vht_handle_opmode(sdata, sta, *elems.opmode_notif,
|
ieee80211_vht_handle_opmode(sdata, sta, *elems.opmode_notif,
|
||||||
rx_status->band, true);
|
rx_status->band);
|
||||||
mutex_unlock(&local->sta_mtx);
|
mutex_unlock(&local->sta_mtx);
|
||||||
|
|
||||||
changed |= ieee80211_handle_pwr_constr(sdata, chan, mgmt,
|
changed |= ieee80211_handle_pwr_constr(sdata, chan, mgmt,
|
||||||
|
|
|
@ -2736,8 +2736,7 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx)
|
||||||
opmode = mgmt->u.action.u.vht_opmode_notif.operating_mode;
|
opmode = mgmt->u.action.u.vht_opmode_notif.operating_mode;
|
||||||
|
|
||||||
ieee80211_vht_handle_opmode(rx->sdata, rx->sta,
|
ieee80211_vht_handle_opmode(rx->sdata, rx->sta,
|
||||||
opmode, status->band,
|
opmode, status->band);
|
||||||
false);
|
|
||||||
goto handled;
|
goto handled;
|
||||||
}
|
}
|
||||||
default:
|
default:
|
||||||
|
|
|
@ -1641,6 +1641,29 @@ void ieee80211_stop_device(struct ieee80211_local *local)
|
||||||
drv_stop(local);
|
drv_stop(local);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void ieee80211_flush_completed_scan(struct ieee80211_local *local,
|
||||||
|
bool aborted)
|
||||||
|
{
|
||||||
|
/* It's possible that we don't handle the scan completion in
|
||||||
|
* time during suspend, so if it's still marked as completed
|
||||||
|
* here, queue the work and flush it to clean things up.
|
||||||
|
* Instead of calling the worker function directly here, we
|
||||||
|
* really queue it to avoid potential races with other flows
|
||||||
|
* scheduling the same work.
|
||||||
|
*/
|
||||||
|
if (test_bit(SCAN_COMPLETED, &local->scanning)) {
|
||||||
|
/* If coming from reconfiguration failure, abort the scan so
|
||||||
|
* we don't attempt to continue a partial HW scan - which is
|
||||||
|
* possible otherwise if (e.g.) the 2.4 GHz portion was the
|
||||||
|
* completed scan, and a 5 GHz portion is still pending.
|
||||||
|
*/
|
||||||
|
if (aborted)
|
||||||
|
set_bit(SCAN_ABORTED, &local->scanning);
|
||||||
|
ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
|
||||||
|
flush_delayed_work(&local->scan_work);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)
|
static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)
|
||||||
{
|
{
|
||||||
struct ieee80211_sub_if_data *sdata;
|
struct ieee80211_sub_if_data *sdata;
|
||||||
|
@ -1660,6 +1683,8 @@ static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)
|
||||||
local->suspended = false;
|
local->suspended = false;
|
||||||
local->in_reconfig = false;
|
local->in_reconfig = false;
|
||||||
|
|
||||||
|
ieee80211_flush_completed_scan(local, true);
|
||||||
|
|
||||||
/* scheduled scan clearly can't be running any more, but tell
|
/* scheduled scan clearly can't be running any more, but tell
|
||||||
* cfg80211 and clear local state
|
* cfg80211 and clear local state
|
||||||
*/
|
*/
|
||||||
|
@ -1698,6 +1723,27 @@ static void ieee80211_assign_chanctx(struct ieee80211_local *local,
|
||||||
mutex_unlock(&local->chanctx_mtx);
|
mutex_unlock(&local->chanctx_mtx);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)
|
||||||
|
{
|
||||||
|
struct ieee80211_local *local = sdata->local;
|
||||||
|
struct sta_info *sta;
|
||||||
|
|
||||||
|
/* add STAs back */
|
||||||
|
mutex_lock(&local->sta_mtx);
|
||||||
|
list_for_each_entry(sta, &local->sta_list, list) {
|
||||||
|
enum ieee80211_sta_state state;
|
||||||
|
|
||||||
|
if (!sta->uploaded || sta->sdata != sdata)
|
||||||
|
continue;
|
||||||
|
|
||||||
|
for (state = IEEE80211_STA_NOTEXIST;
|
||||||
|
state < sta->sta_state; state++)
|
||||||
|
WARN_ON(drv_sta_state(local, sta->sdata, sta, state,
|
||||||
|
state + 1));
|
||||||
|
}
|
||||||
|
mutex_unlock(&local->sta_mtx);
|
||||||
|
}
|
||||||
|
|
||||||
int ieee80211_reconfig(struct ieee80211_local *local)
|
int ieee80211_reconfig(struct ieee80211_local *local)
|
||||||
{
|
{
|
||||||
struct ieee80211_hw *hw = &local->hw;
|
struct ieee80211_hw *hw = &local->hw;
|
||||||
|
@ -1833,50 +1879,11 @@ int ieee80211_reconfig(struct ieee80211_local *local)
|
||||||
WARN_ON(drv_add_chanctx(local, ctx));
|
WARN_ON(drv_add_chanctx(local, ctx));
|
||||||
mutex_unlock(&local->chanctx_mtx);
|
mutex_unlock(&local->chanctx_mtx);
|
||||||
|
|
||||||
list_for_each_entry(sdata, &local->interfaces, list) {
|
|
||||||
if (!ieee80211_sdata_running(sdata))
|
|
||||||
continue;
|
|
||||||
ieee80211_assign_chanctx(local, sdata);
|
|
||||||
}
|
|
||||||
|
|
||||||
sdata = rtnl_dereference(local->monitor_sdata);
|
sdata = rtnl_dereference(local->monitor_sdata);
|
||||||
if (sdata && ieee80211_sdata_running(sdata))
|
if (sdata && ieee80211_sdata_running(sdata))
|
||||||
ieee80211_assign_chanctx(local, sdata);
|
ieee80211_assign_chanctx(local, sdata);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* add STAs back */
|
|
||||||
mutex_lock(&local->sta_mtx);
|
|
||||||
list_for_each_entry(sta, &local->sta_list, list) {
|
|
||||||
enum ieee80211_sta_state state;
|
|
||||||
|
|
||||||
if (!sta->uploaded)
|
|
||||||
continue;
|
|
||||||
|
|
||||||
/* AP-mode stations will be added later */
|
|
||||||
if (sta->sdata->vif.type == NL80211_IFTYPE_AP)
|
|
||||||
continue;
|
|
||||||
|
|
||||||
for (state = IEEE80211_STA_NOTEXIST;
|
|
||||||
state < sta->sta_state; state++)
|
|
||||||
WARN_ON(drv_sta_state(local, sta->sdata, sta, state,
|
|
||||||
state + 1));
|
|
||||||
}
|
|
||||||
mutex_unlock(&local->sta_mtx);
|
|
||||||
|
|
||||||
/* reconfigure tx conf */
|
|
||||||
if (hw->queues >= IEEE80211_NUM_ACS) {
|
|
||||||
list_for_each_entry(sdata, &local->interfaces, list) {
|
|
||||||
if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
|
|
||||||
sdata->vif.type == NL80211_IFTYPE_MONITOR ||
|
|
||||||
!ieee80211_sdata_running(sdata))
|
|
||||||
continue;
|
|
||||||
|
|
||||||
for (i = 0; i < IEEE80211_NUM_ACS; i++)
|
|
||||||
drv_conf_tx(local, sdata, i,
|
|
||||||
&sdata->tx_conf[i]);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
/* reconfigure hardware */
|
/* reconfigure hardware */
|
||||||
ieee80211_hw_config(local, ~0);
|
ieee80211_hw_config(local, ~0);
|
||||||
|
|
||||||
|
@ -1889,6 +1896,22 @@ int ieee80211_reconfig(struct ieee80211_local *local)
|
||||||
if (!ieee80211_sdata_running(sdata))
|
if (!ieee80211_sdata_running(sdata))
|
||||||
continue;
|
continue;
|
||||||
|
|
||||||
|
ieee80211_assign_chanctx(local, sdata);
|
||||||
|
|
||||||
|
switch (sdata->vif.type) {
|
||||||
|
case NL80211_IFTYPE_AP_VLAN:
|
||||||
|
case NL80211_IFTYPE_MONITOR:
|
||||||
|
break;
|
||||||
|
default:
|
||||||
|
ieee80211_reconfig_stations(sdata);
|
||||||
|
/* fall through */
|
||||||
|
case NL80211_IFTYPE_AP: /* AP stations are handled later */
|
||||||
|
for (i = 0; i < IEEE80211_NUM_ACS; i++)
|
||||||
|
drv_conf_tx(local, sdata, i,
|
||||||
|
&sdata->tx_conf[i]);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
|
||||||
/* common change flags for all interface types */
|
/* common change flags for all interface types */
|
||||||
changed = BSS_CHANGED_ERP_CTS_PROT |
|
changed = BSS_CHANGED_ERP_CTS_PROT |
|
||||||
BSS_CHANGED_ERP_PREAMBLE |
|
BSS_CHANGED_ERP_PREAMBLE |
|
||||||
|
@ -2074,17 +2097,7 @@ int ieee80211_reconfig(struct ieee80211_local *local)
|
||||||
mb();
|
mb();
|
||||||
local->resuming = false;
|
local->resuming = false;
|
||||||
|
|
||||||
/* It's possible that we don't handle the scan completion in
|
ieee80211_flush_completed_scan(local, false);
|
||||||
* time during suspend, so if it's still marked as completed
|
|
||||||
* here, queue the work and flush it to clean things up.
|
|
||||||
* Instead of calling the worker function directly here, we
|
|
||||||
* really queue it to avoid potential races with other flows
|
|
||||||
* scheduling the same work.
|
|
||||||
*/
|
|
||||||
if (test_bit(SCAN_COMPLETED, &local->scanning)) {
|
|
||||||
ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
|
|
||||||
flush_delayed_work(&local->scan_work);
|
|
||||||
}
|
|
||||||
|
|
||||||
if (local->open_count && !reconfig_due_to_wowlan)
|
if (local->open_count && !reconfig_due_to_wowlan)
|
||||||
drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_SUSPEND);
|
drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_SUSPEND);
|
||||||
|
|
|
@ -378,7 +378,7 @@ void ieee80211_sta_set_rx_nss(struct sta_info *sta)
|
||||||
|
|
||||||
u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
struct sta_info *sta, u8 opmode,
|
struct sta_info *sta, u8 opmode,
|
||||||
enum ieee80211_band band, bool nss_only)
|
enum ieee80211_band band)
|
||||||
{
|
{
|
||||||
struct ieee80211_local *local = sdata->local;
|
struct ieee80211_local *local = sdata->local;
|
||||||
struct ieee80211_supported_band *sband;
|
struct ieee80211_supported_band *sband;
|
||||||
|
@ -401,9 +401,6 @@ u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
changed |= IEEE80211_RC_NSS_CHANGED;
|
changed |= IEEE80211_RC_NSS_CHANGED;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (nss_only)
|
|
||||||
return changed;
|
|
||||||
|
|
||||||
switch (opmode & IEEE80211_OPMODE_NOTIF_CHANWIDTH_MASK) {
|
switch (opmode & IEEE80211_OPMODE_NOTIF_CHANWIDTH_MASK) {
|
||||||
case IEEE80211_OPMODE_NOTIF_CHANWIDTH_20MHZ:
|
case IEEE80211_OPMODE_NOTIF_CHANWIDTH_20MHZ:
|
||||||
sta->cur_max_bandwidth = IEEE80211_STA_RX_BW_20;
|
sta->cur_max_bandwidth = IEEE80211_STA_RX_BW_20;
|
||||||
|
@ -430,13 +427,12 @@ u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
|
|
||||||
void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
void ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata,
|
||||||
struct sta_info *sta, u8 opmode,
|
struct sta_info *sta, u8 opmode,
|
||||||
enum ieee80211_band band, bool nss_only)
|
enum ieee80211_band band)
|
||||||
{
|
{
|
||||||
struct ieee80211_local *local = sdata->local;
|
struct ieee80211_local *local = sdata->local;
|
||||||
struct ieee80211_supported_band *sband = local->hw.wiphy->bands[band];
|
struct ieee80211_supported_band *sband = local->hw.wiphy->bands[band];
|
||||||
|
|
||||||
u32 changed = __ieee80211_vht_handle_opmode(sdata, sta, opmode,
|
u32 changed = __ieee80211_vht_handle_opmode(sdata, sta, opmode, band);
|
||||||
band, nss_only);
|
|
||||||
|
|
||||||
if (changed > 0)
|
if (changed > 0)
|
||||||
rate_control_rate_update(local, sband, sta, changed);
|
rate_control_rate_update(local, sband, sta, changed);
|
||||||
|
|
|
diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
@@ -27,6 +27,8 @@
  */
 #define MAX_MP_SELECT_LABELS 4
 
+#define MPLS_NEIGH_TABLE_UNSPEC (NEIGH_LINK_TABLE + 1)
+
 static int zero = 0;
 static int label_limit = (1 << 20) - 1;
 
@@ -317,7 +319,13 @@ static int mpls_forward(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	err = neigh_xmit(nh->nh_via_table, out_dev, mpls_nh_via(rt, nh), skb);
+	/* If via wasn't specified then send out using device address */
+	if (nh->nh_via_table == MPLS_NEIGH_TABLE_UNSPEC)
+		err = neigh_xmit(NEIGH_LINK_TABLE, out_dev,
+				 out_dev->dev_addr, skb);
+	else
+		err = neigh_xmit(nh->nh_via_table, out_dev,
+				 mpls_nh_via(rt, nh), skb);
 	if (err)
 		net_dbg_ratelimited("%s: packet transmission failed: %d\n",
 				    __func__, err);
@@ -534,6 +542,10 @@ static int mpls_nh_assign_dev(struct net *net, struct mpls_route *rt,
 	if (!mpls_dev_get(dev))
 		goto errout;
 
+	if ((nh->nh_via_table == NEIGH_LINK_TABLE) &&
+	    (dev->addr_len != nh->nh_via_alen))
+		goto errout;
+
 	RCU_INIT_POINTER(nh->nh_dev, dev);
 
 	return 0;
@@ -592,10 +604,14 @@ static int mpls_nh_build(struct net *net, struct mpls_route *rt,
 			goto errout;
 	}
 
-	err = nla_get_via(via, &nh->nh_via_alen, &nh->nh_via_table,
-			  __mpls_nh_via(rt, nh));
-	if (err)
-		goto errout;
+	if (via) {
+		err = nla_get_via(via, &nh->nh_via_alen, &nh->nh_via_table,
+				  __mpls_nh_via(rt, nh));
+		if (err)
+			goto errout;
+	} else {
+		nh->nh_via_table = MPLS_NEIGH_TABLE_UNSPEC;
+	}
 
 	err = mpls_nh_assign_dev(net, rt, nh, oif);
 	if (err)
@@ -677,9 +693,6 @@ static int mpls_nh_build_multi(struct mpls_route_config *cfg,
 			nla_newdst = nla_find(attrs, attrlen, RTA_NEWDST);
 		}
 
-		if (!nla_via)
-			goto errout;
-
 		err = mpls_nh_build(cfg->rc_nlinfo.nl_net, rt, nh,
 				    rtnh->rtnh_ifindex, nla_via,
 				    nla_newdst);
@@ -1118,6 +1131,7 @@ static int rtm_to_route_config(struct sk_buff *skb, struct nlmsghdr *nlh,
 
 	cfg->rc_label = LABEL_NOT_SPECIFIED;
 	cfg->rc_protocol = rtm->rtm_protocol;
+	cfg->rc_via_table = MPLS_NEIGH_TABLE_UNSPEC;
 	cfg->rc_nlflags = nlh->nlmsg_flags;
 	cfg->rc_nlinfo.portid = NETLINK_CB(skb).portid;
 	cfg->rc_nlinfo.nlh = nlh;
@@ -1231,7 +1245,8 @@ static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 		    nla_put_labels(skb, RTA_NEWDST, nh->nh_labels,
 				   nh->nh_label))
 			goto nla_put_failure;
-		if (nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),
+		if (nh->nh_via_table != MPLS_NEIGH_TABLE_UNSPEC &&
+		    nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),
 				nh->nh_via_alen))
 			goto nla_put_failure;
 		dev = rtnl_dereference(nh->nh_dev);
@@ -1257,7 +1272,8 @@ static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 					    nh->nh_labels,
 					    nh->nh_label))
 				goto nla_put_failure;
-			if (nla_put_via(skb, nh->nh_via_table,
+			if (nh->nh_via_table != MPLS_NEIGH_TABLE_UNSPEC &&
+			    nla_put_via(skb, nh->nh_via_table,
 					mpls_nh_via(rt, nh),
 					nh->nh_via_alen))
 				goto nla_put_failure;
@@ -1319,7 +1335,8 @@ static inline size_t lfib_nlmsg_size(struct mpls_route *rt)
 
 		if (nh->nh_dev)
 			payload += nla_total_size(4); /* RTA_OIF */
-		payload += nla_total_size(2 + nh->nh_via_alen); /* RTA_VIA */
+		if (nh->nh_via_table != MPLS_NEIGH_TABLE_UNSPEC) /* RTA_VIA */
+			payload += nla_total_size(2 + nh->nh_via_alen);
 		if (nh->nh_labels) /* RTA_NEWDST */
 			payload += nla_total_size(nh->nh_labels * 4);
 	} else {
@@ -1328,7 +1345,9 @@ static inline size_t lfib_nlmsg_size(struct mpls_route *rt)
 
 		for_nexthops(rt) {
 			nhsize += nla_total_size(sizeof(struct rtnexthop));
-			nhsize += nla_total_size(2 + nh->nh_via_alen);
+			/* RTA_VIA */
+			if (nh->nh_via_table != MPLS_NEIGH_TABLE_UNSPEC)
+				nhsize += nla_total_size(2 + nh->nh_via_alen);
 			if (nh->nh_labels)
 				nhsize += nla_total_size(nh->nh_labels * 4);
 		} endfor_nexthops(rt);
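Taken together, the af_mpls.c hunks make the via (RTA_VIA) attribute optional: a next hop built without a via address records the sentinel MPLS_NEIGH_TABLE_UNSPEC, the forwarding path then falls back to neigh_xmit() through the link-layer neighbour table using the output device's own address, and the netlink dump and size paths skip RTA_VIA for such next hops. A minimal, runnable user-space sketch of the fallback rule follows; the identifiers mirror the kernel's, but the program itself is purely illustrative:

#include <stdio.h>

/* Mirrors the neighbour-table enum: LINK is the last real table and
 * the MPLS code defines its sentinel one past it. */
enum { NEIGH_ARP_TABLE, NEIGH_ND_TABLE, NEIGH_DN_TABLE,
       NEIGH_LINK_TABLE, MPLS_NEIGH_TABLE_UNSPEC /* LINK + 1 */ };

static const char *xmit_choice(int nh_via_table)
{
	/* No via configured: use the device's own address through
	 * the link-layer neighbour table. */
	if (nh_via_table == MPLS_NEIGH_TABLE_UNSPEC)
		return "neigh_xmit(NEIGH_LINK_TABLE, out_dev->dev_addr)";
	return "neigh_xmit(nh->nh_via_table, mpls_nh_via(rt, nh))";
}

int main(void)
{
	printf("via unset: %s\n", xmit_choice(MPLS_NEIGH_TABLE_UNSPEC));
	printf("via inet:  %s\n", xmit_choice(NEIGH_ARP_TABLE));
	return 0;
}

With this in place a multipath route can mix next hops with and without a via address, e.g. something along the lines of "ip -f mpls route add 100 nexthop dev eth0 nexthop via inet 10.1.1.1 dev eth1" (hypothetical addresses and device names).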
diff --git a/net/mpls/mpls_iptunnel.c b/net/mpls/mpls_iptunnel.c
@@ -54,10 +54,10 @@ int mpls_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 	unsigned int ttl;
 
 	/* Obtain the ttl */
-	if (skb->protocol == htons(ETH_P_IP)) {
+	if (dst->ops->family == AF_INET) {
 		ttl = ip_hdr(skb)->ttl;
 		rt = (struct rtable *)dst;
-	} else if (skb->protocol == htons(ETH_P_IPV6)) {
+	} else if (dst->ops->family == AF_INET6) {
 		ttl = ipv6_hdr(skb)->hop_limit;
 		rt6 = (struct rt6_info *)dst;
 	} else {
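The mpls_iptunnel.c hunk switches the TTL lookup from skb->protocol to the address family of the attached dst entry. On this output path the route's family is authoritative, whereas skb->protocol is not guaranteed to be set consistently by every sender. A runnable sketch of the resulting dispatch, with the kernel expressions only echoed in comments:

#include <stdio.h>

enum fam { FAM_INET, FAM_INET6, FAM_OTHER };  /* stand-ins for AF_INET* */

/* Pick the ttl the way the patched mpls_output() does: from the
 * family of the cached route, not from skb->protocol. */
static int ttl_for(enum fam dst_family, int v4_ttl, int v6_hop_limit)
{
	switch (dst_family) {
	case FAM_INET:  return v4_ttl;        /* ip_hdr(skb)->ttl */
	case FAM_INET6: return v6_hop_limit;  /* ipv6_hdr(skb)->hop_limit */
	default:        return -1;            /* unsupported family: drop */
	}
}

int main(void)
{
	printf("v4 ttl = %d\n", ttl_for(FAM_INET, 64, 255));
	printf("v6 hl  = %d\n", ttl_for(FAM_INET6, 64, 255));
	return 0;
}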
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
@@ -89,6 +89,7 @@ nf_tables_afinfo_lookup(struct net *net, int family, bool autoload)
 }
 
 static void nft_ctx_init(struct nft_ctx *ctx,
+			 struct net *net,
			 const struct sk_buff *skb,
			 const struct nlmsghdr *nlh,
			 struct nft_af_info *afi,
@@ -96,7 +97,7 @@ static void nft_ctx_init(struct nft_ctx *ctx,
			 struct nft_chain *chain,
			 const struct nlattr * const *nla)
 {
-	ctx->net   = sock_net(skb->sk);
+	ctx->net   = net;
 	ctx->afi   = afi;
 	ctx->table = table;
 	ctx->chain = chain;
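The remaining nf_tables_api.c hunks all follow from the nft_ctx_init() change just above: instead of each netlink handler re-deriving its network namespace with sock_net(skb->sk), the already-resolved struct net pointer is threaded down from the caller into every new/del handler. A runnable sketch of the pattern under simplified, illustrative types (none of these structs match the kernel's layout):

#include <stdio.h>

struct net { int id; };
struct msg { struct net *sender_net; };  /* stand-in for the skb */
struct ctx { struct net *net; };

/* after the change: the caller hands the resolved netns in, rather
 * than every handler calling sock_net(skb->sk) itself */
static void ctx_init(struct ctx *ctx, struct net *net)
{
	ctx->net = net;
}

static int newtable(struct net *net, struct msg *m)
{
	struct ctx ctx;

	(void)m;            /* message still passed, netns no longer from it */
	ctx_init(&ctx, net);            /* nft_ctx_init(&ctx, net, ...) */
	return ctx.net->id;
}

int main(void)
{
	struct net n = { .id = 42 };
	struct msg m = { .sender_net = &n };
	struct net *net = m.sender_net; /* resolved once at the entry point */

	printf("handled in netns %d\n", newtable(net, &m));
	return 0;
}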
@@ -672,15 +673,14 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
 	return ret;
 }
 
-static int nf_tables_newtable(struct sock *nlsk, struct sk_buff *skb,
-			      const struct nlmsghdr *nlh,
+static int nf_tables_newtable(struct net *net, struct sock *nlsk,
+			      struct sk_buff *skb, const struct nlmsghdr *nlh,
			      const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	const struct nlattr *name;
 	struct nft_af_info *afi;
 	struct nft_table *table;
-	struct net *net = sock_net(skb->sk);
 	int family = nfmsg->nfgen_family;
 	u32 flags = 0;
 	struct nft_ctx ctx;
@@ -706,7 +706,7 @@ static int nf_tables_newtable(struct sock *nlsk, struct sk_buff *skb,
 		if (nlh->nlmsg_flags & NLM_F_REPLACE)
 			return -EOPNOTSUPP;
 
-		nft_ctx_init(&ctx, skb, nlh, afi, table, NULL, nla);
+		nft_ctx_init(&ctx, net, skb, nlh, afi, table, NULL, nla);
 		return nf_tables_updtable(&ctx);
 	}
 
@@ -730,7 +730,7 @@ static int nf_tables_newtable(struct sock *nlsk, struct sk_buff *skb,
 	INIT_LIST_HEAD(&table->sets);
 	table->flags = flags;
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, NULL, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, NULL, nla);
 	err = nft_trans_table_add(&ctx, NFT_MSG_NEWTABLE);
 	if (err < 0)
 		goto err3;
@@ -810,18 +810,17 @@ static int nft_flush(struct nft_ctx *ctx, int family)
 	return err;
 }
 
-static int nf_tables_deltable(struct sock *nlsk, struct sk_buff *skb,
-			      const struct nlmsghdr *nlh,
+static int nf_tables_deltable(struct net *net, struct sock *nlsk,
+			      struct sk_buff *skb, const struct nlmsghdr *nlh,
			      const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi;
 	struct nft_table *table;
-	struct net *net = sock_net(skb->sk);
 	int family = nfmsg->nfgen_family;
 	struct nft_ctx ctx;
 
-	nft_ctx_init(&ctx, skb, nlh, NULL, NULL, NULL, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, NULL, NULL, NULL, nla);
 	if (family == AF_UNSPEC || nla[NFTA_TABLE_NAME] == NULL)
 		return nft_flush(&ctx, family);
 
@@ -1221,8 +1220,8 @@ static void nf_tables_chain_destroy(struct nft_chain *chain)
 	}
 }
 
-static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb,
-			      const struct nlmsghdr *nlh,
+static int nf_tables_newchain(struct net *net, struct sock *nlsk,
+			      struct sk_buff *skb, const struct nlmsghdr *nlh,
			      const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
@@ -1232,7 +1231,6 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb,
 	struct nft_chain *chain;
 	struct nft_base_chain *basechain = NULL;
 	struct nlattr *ha[NFTA_HOOK_MAX + 1];
-	struct net *net = sock_net(skb->sk);
 	int family = nfmsg->nfgen_family;
 	struct net_device *dev = NULL;
 	u8 policy = NF_ACCEPT;
@@ -1313,7 +1311,7 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb,
 			return PTR_ERR(stats);
 		}
 
-		nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
+		nft_ctx_init(&ctx, net, skb, nlh, afi, table, chain, nla);
 		trans = nft_trans_alloc(&ctx, NFT_MSG_NEWCHAIN,
					sizeof(struct nft_trans_chain));
 		if (trans == NULL) {
@@ -1461,7 +1459,7 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb,
 	if (err < 0)
 		goto err1;
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, chain, nla);
 	err = nft_trans_chain_add(&ctx, NFT_MSG_NEWCHAIN);
 	if (err < 0)
 		goto err2;
@@ -1476,15 +1474,14 @@ static int nf_tables_newchain(struct sock *nlsk, struct sk_buff *skb,
 	return err;
 }
 
-static int nf_tables_delchain(struct sock *nlsk, struct sk_buff *skb,
-			      const struct nlmsghdr *nlh,
+static int nf_tables_delchain(struct net *net, struct sock *nlsk,
+			      struct sk_buff *skb, const struct nlmsghdr *nlh,
			      const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi;
 	struct nft_table *table;
 	struct nft_chain *chain;
-	struct net *net = sock_net(skb->sk);
 	int family = nfmsg->nfgen_family;
 	struct nft_ctx ctx;
 
@@ -1506,7 +1503,7 @@ static int nf_tables_delchain(struct sock *nlsk, struct sk_buff *skb,
 	if (chain->use > 0)
 		return -EBUSY;
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, chain, nla);
 
 	return nft_delchain(&ctx);
 }
@@ -2010,13 +2007,12 @@ static void nf_tables_rule_destroy(const struct nft_ctx *ctx,
 
 static struct nft_expr_info *info;
 
-static int nf_tables_newrule(struct sock *nlsk, struct sk_buff *skb,
-			     const struct nlmsghdr *nlh,
+static int nf_tables_newrule(struct net *net, struct sock *nlsk,
+			     struct sk_buff *skb, const struct nlmsghdr *nlh,
			     const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi;
-	struct net *net = sock_net(skb->sk);
 	struct nft_table *table;
 	struct nft_chain *chain;
 	struct nft_rule *rule, *old_rule = NULL;
@@ -2075,7 +2071,7 @@ static int nf_tables_newrule(struct sock *nlsk, struct sk_buff *skb,
 			return PTR_ERR(old_rule);
 	}
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, chain, nla);
 
 	n = 0;
 	size = 0;
@@ -2176,13 +2172,12 @@ static int nf_tables_newrule(struct sock *nlsk, struct sk_buff *skb,
 	return err;
 }
 
-static int nf_tables_delrule(struct sock *nlsk, struct sk_buff *skb,
-			     const struct nlmsghdr *nlh,
+static int nf_tables_delrule(struct net *net, struct sock *nlsk,
+			     struct sk_buff *skb, const struct nlmsghdr *nlh,
			     const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi;
-	struct net *net = sock_net(skb->sk);
 	struct nft_table *table;
 	struct nft_chain *chain = NULL;
 	struct nft_rule *rule;
@@ -2205,7 +2200,7 @@ static int nf_tables_delrule(struct sock *nlsk, struct sk_buff *skb,
 			return PTR_ERR(chain);
 	}
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, chain, nla);
 
 	if (chain) {
 		if (nla[NFTA_RULE_HANDLE]) {
@@ -2344,12 +2339,11 @@ static const struct nla_policy nft_set_desc_policy[NFTA_SET_DESC_MAX + 1] = {
 	[NFTA_SET_DESC_SIZE] = { .type = NLA_U32 },
 };
 
-static int nft_ctx_init_from_setattr(struct nft_ctx *ctx,
+static int nft_ctx_init_from_setattr(struct nft_ctx *ctx, struct net *net,
				     const struct sk_buff *skb,
				     const struct nlmsghdr *nlh,
				     const struct nlattr * const nla[])
 {
-	struct net *net = sock_net(skb->sk);
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi = NULL;
 	struct nft_table *table = NULL;
@@ -2371,7 +2365,7 @@ static int nft_ctx_init_from_setattr(struct nft_ctx *ctx,
 			return -ENOENT;
 	}
 
-	nft_ctx_init(ctx, skb, nlh, afi, table, NULL, nla);
+	nft_ctx_init(ctx, net, skb, nlh, afi, table, NULL, nla);
 	return 0;
 }
 
@@ -2623,6 +2617,7 @@ static int nf_tables_getset(struct sock *nlsk, struct sk_buff *skb,
			    const struct nlmsghdr *nlh,
			    const struct nlattr * const nla[])
 {
+	struct net *net = sock_net(skb->sk);
 	const struct nft_set *set;
 	struct nft_ctx ctx;
 	struct sk_buff *skb2;
@@ -2630,7 +2625,7 @@ static int nf_tables_getset(struct sock *nlsk, struct sk_buff *skb,
 	int err;
 
 	/* Verify existence before starting dump */
-	err = nft_ctx_init_from_setattr(&ctx, skb, nlh, nla);
+	err = nft_ctx_init_from_setattr(&ctx, net, skb, nlh, nla);
 	if (err < 0)
 		return err;
 
@@ -2693,14 +2688,13 @@ static int nf_tables_set_desc_parse(const struct nft_ctx *ctx,
 	return 0;
 }
 
-static int nf_tables_newset(struct sock *nlsk, struct sk_buff *skb,
-			    const struct nlmsghdr *nlh,
+static int nf_tables_newset(struct net *net, struct sock *nlsk,
+			    struct sk_buff *skb, const struct nlmsghdr *nlh,
			    const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	const struct nft_set_ops *ops;
 	struct nft_af_info *afi;
-	struct net *net = sock_net(skb->sk);
 	struct nft_table *table;
 	struct nft_set *set;
 	struct nft_ctx ctx;
@@ -2798,7 +2792,7 @@ static int nf_tables_newset(struct sock *nlsk, struct sk_buff *skb,
 	if (IS_ERR(table))
 		return PTR_ERR(table);
 
-	nft_ctx_init(&ctx, skb, nlh, afi, table, NULL, nla);
+	nft_ctx_init(&ctx, net, skb, nlh, afi, table, NULL, nla);
 
 	set = nf_tables_set_lookup(table, nla[NFTA_SET_NAME]);
 	if (IS_ERR(set)) {
@@ -2882,8 +2876,8 @@ static void nf_tables_set_destroy(const struct nft_ctx *ctx, struct nft_set *set
 	nft_set_destroy(set);
 }
 
-static int nf_tables_delset(struct sock *nlsk, struct sk_buff *skb,
-			    const struct nlmsghdr *nlh,
+static int nf_tables_delset(struct net *net, struct sock *nlsk,
+			    struct sk_buff *skb, const struct nlmsghdr *nlh,
			    const struct nlattr * const nla[])
 {
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
@@ -2896,7 +2890,7 @@ static int nf_tables_delset(struct sock *nlsk, struct sk_buff *skb,
 	if (nla[NFTA_SET_TABLE] == NULL)
 		return -EINVAL;
 
-	err = nft_ctx_init_from_setattr(&ctx, skb, nlh, nla);
+	err = nft_ctx_init_from_setattr(&ctx, net, skb, nlh, nla);
 	if (err < 0)
 		return err;
 
@@ -3024,7 +3018,7 @@ static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = {
 	[NFTA_SET_ELEM_LIST_SET_ID] = { .type = NLA_U32 },
 };
 
-static int nft_ctx_init_from_elemattr(struct nft_ctx *ctx,
+static int nft_ctx_init_from_elemattr(struct nft_ctx *ctx, struct net *net,
				      const struct sk_buff *skb,
				      const struct nlmsghdr *nlh,
				      const struct nlattr * const nla[],
@@ -3033,7 +3027,6 @@ static int nft_ctx_init_from_elemattr(struct nft_ctx *ctx,
 	const struct nfgenmsg *nfmsg = nlmsg_data(nlh);
 	struct nft_af_info *afi;
 	struct nft_table *table;
-	struct net *net = sock_net(skb->sk);
 
 	afi = nf_tables_afinfo_lookup(net, nfmsg->nfgen_family, false);
 	if (IS_ERR(afi))
@@ -3045,7 +3038,7 @@ static int nft_ctx_init_from_elemattr(struct nft_ctx *ctx,
 	if (!trans && (table->flags & NFT_TABLE_INACTIVE))
 		return -ENOENT;
 
-	nft_ctx_init(ctx, skb, nlh, afi, table, NULL, nla);
+	nft_ctx_init(ctx, net, skb, nlh, afi, table, NULL, nla);
 	return 0;
 }
 
@@ -3135,6 +3128,7 @@ static int nf_tables_dump_setelem(const struct nft_ctx *ctx,
 
 static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
 {
+	struct net *net = sock_net(skb->sk);
 	const struct nft_set *set;
 	struct nft_set_dump_args args;
 	struct nft_ctx ctx;
@@ -3150,8 +3144,8 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
 	if (err < 0)
 		return err;
 
-	err = nft_ctx_init_from_elemattr(&ctx, cb->skb, cb->nlh, (void *)nla,
-					 false);
+	err = nft_ctx_init_from_elemattr(&ctx, net, cb->skb, cb->nlh,
+					 (void *)nla, false);
 	if (err < 0)
 		return err;
 
@@ -3212,11 +3206,12 @@ static int nf_tables_getsetelem(struct sock *nlsk, struct sk_buff *skb,
				const struct nlmsghdr *nlh,
				const struct nlattr * const nla[])
 {
+	struct net *net = sock_net(skb->sk);
 	const struct nft_set *set;
 	struct nft_ctx ctx;
 	int err;
 
-	err = nft_ctx_init_from_elemattr(&ctx, skb, nlh, nla, false);
+	err = nft_ctx_init_from_elemattr(&ctx, net, skb, nlh, nla, false);
 	if (err < 0)
 		return err;
 
@@ -3528,11 +3523,10 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
 	return err;
 }
 
-static int nf_tables_newsetelem(struct sock *nlsk, struct sk_buff *skb,
-				const struct nlmsghdr *nlh,
+static int nf_tables_newsetelem(struct net *net, struct sock *nlsk,
+				struct sk_buff *skb, const struct nlmsghdr *nlh,
				const struct nlattr * const nla[])
 {
-	struct net *net = sock_net(skb->sk);
 	const struct nlattr *attr;
 	struct nft_set *set;
 	struct nft_ctx ctx;
@@ -3541,7 +3535,7 @@ static int nf_tables_newsetelem(struct sock *nlsk, struct sk_buff *skb,
 	if (nla[NFTA_SET_ELEM_LIST_ELEMENTS] == NULL)
 		return -EINVAL;
 
-	err = nft_ctx_init_from_elemattr(&ctx, skb, nlh, nla, true);
+	err = nft_ctx_init_from_elemattr(&ctx, net, skb, nlh, nla, true);
 	if (err < 0)
 		return err;
 
@@ -3623,8 +3617,8 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
 	return err;
 }
 
-static int nf_tables_delsetelem(struct sock *nlsk, struct sk_buff *skb,
-				const struct nlmsghdr *nlh,
+static int nf_tables_delsetelem(struct net *net, struct sock *nlsk,
+				struct sk_buff *skb, const struct nlmsghdr *nlh,
				const struct nlattr * const nla[])
 {
 	const struct nlattr *attr;
@@ -3635,7 +3629,7 @@ static int nf_tables_delsetelem(struct sock *nlsk, struct sk_buff *skb,
 	if (nla[NFTA_SET_ELEM_LIST_ELEMENTS] == NULL)
 		return -EINVAL;
 
-	err = nft_ctx_init_from_elemattr(&ctx, skb, nlh, nla, false);
+	err = nft_ctx_init_from_elemattr(&ctx, net, skb, nlh, nla, false);
 	if (err < 0)
 		return err;
 
@@ -4030,7 +4024,8 @@ static int nf_tables_abort(struct sk_buff *skb)
 	struct nft_trans *trans, *next;
 	struct nft_trans_elem *te;
 
-	list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
+	list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
+					 list) {
 		switch (trans->msg_type) {
 		case NFT_MSG_NEWTABLE:
 			if (nft_trans_table_update(trans)) {
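The final hunk switches nf_tables_abort() to list_for_each_entry_safe_reverse(). Undo has to run in LIFO order: the transaction log appends entries in creation order (table, then chain, then rule), so walking it front-to-back on abort could release an object while later log entries that depend on it are still pending. A small runnable sketch of the ordering argument; the message names are illustrative:

#include <stdio.h>

int main(void)
{
	/* queued in creation order, as the commit_list is */
	const char *commit_list[] = { "NEWTABLE t", "NEWCHAIN t c",
				      "NEWRULE t c r" };
	int n = 3;

	/* list_for_each_entry_safe_reverse(): newest entry undone first,
	 * so "t" is only released after "c" and "r" are gone */
	for (int i = n - 1; i >= 0; i--)
		printf("abort: undo %s\n", commit_list[i]);
	return 0;
}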
[Some files were not shown because too many files have changed in this diff.]