/*
 * Copyright (c) 2003-2008 Chelsio, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if.h>
#include <linux/if_vlan.h>
#include <linux/jhash.h>
#include <linux/slab.h>
#include <net/neighbour.h>
#include "common.h"
#include "t3cdev.h"
#include "cxgb3_defs.h"
#include "l2t.h"
#include "t3_cpl.h"
#include "firmware_exports.h"

#define VLAN_NONE 0xfff

/*
 * Module locking notes:  There is a RW lock protecting the L2 table as a
 * whole plus a spinlock per L2T entry.  Entry lookups and allocations happen
 * under the protection of the table lock, individual entry changes happen
 * while holding that entry's spinlock.  The table lock nests outside the
 * entry locks.  Allocations of new entries take the table lock as writers so
 * no other lookups can happen while allocating new entries.  Entry updates
 * take the table lock as readers so multiple entries can be updated in
 * parallel.  An L2T entry can be dropped by decrementing its reference count
 * and therefore can happen in parallel with entry allocation but no entry
 * can change state or increment its ref count during allocation as both of
 * these perform lookups.
 */
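
/*
 * Illustrative sketch (not compiled): the lock nesting described above,
 * shown for a hypothetical allocation path and a hypothetical update path.
 * The names follow this file; the bodies are elided.
 */
#if 0
static void example_alloc_path(struct l2t_data *d)
{
	write_lock_bh(&d->lock);	/* table lock as writer: excludes lookups */
	/* ... alloc_l2e(d), then init the entry under spin_lock(&e->lock) ... */
	write_unlock_bh(&d->lock);
}

static void example_update_path(struct l2t_data *d, struct l2t_entry *e)
{
	read_lock_bh(&d->lock);		/* table lock as reader: updates run in parallel */
	spin_lock(&e->lock);		/* entry lock nests inside the table lock */
	/* ... change e->state, e->neigh, ... */
	spin_unlock(&e->lock);
	read_unlock_bh(&d->lock);
}
#endif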

static inline unsigned int vlan_prio(const struct l2t_entry *e)
{
	return e->vlan >> 13;
}
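
/*
 * For reference: e->vlan holds an 802.1Q TCI, with the priority in bits
 * 15:13 and the VLAN id in bits 11:0 (VLAN_VID_MASK).  E.g. a TCI of
 * 0xa005 gives vlan_prio() == 5 and a VLAN id of 5.
 */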

static inline unsigned int arp_hash(u32 key, int ifindex,
				    const struct l2t_data *d)
{
	return jhash_2words(key, ifindex, 0) & (d->nentries - 1);
}
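
/*
 * Note: masking with (d->nentries - 1) only covers every bucket if the
 * table size is a power of 2; callers of t3_init_l2t() are expected to
 * pass such a capacity.
 */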

static inline void neigh_replace(struct l2t_entry *e, struct neighbour *n)
{
	neigh_hold(n);
	if (e->neigh)
		neigh_release(e->neigh);
	e->neigh = n;
}

/*
 * Set up an L2T entry and send any packets waiting in the arp queue.  The
 * supplied skb is used for the CPL_L2T_WRITE_REQ.  Must be called with the
 * entry locked.
 */
static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
				  struct l2t_entry *e)
{
	struct cpl_l2t_write_req *req;
	struct sk_buff *tmp;

	if (!skb) {
		skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
		if (!skb)
			return -ENOMEM;
	}

	req = (struct cpl_l2t_write_req *)__skb_put(skb, sizeof(*req));
	req->wr.wr_hi = htonl(V_WR_OP(FW_WROPCODE_FORWARD));
	OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, e->idx));
	req->params = htonl(V_L2T_W_IDX(e->idx) | V_L2T_W_IFF(e->smt_idx) |
			    V_L2T_W_VLAN(e->vlan & VLAN_VID_MASK) |
			    V_L2T_W_PRIO(vlan_prio(e)));
	memcpy(e->dmac, e->neigh->ha, sizeof(e->dmac));
	memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
	skb->priority = CPL_PRIORITY_CONTROL;
	cxgb3_ofld_send(dev, skb);

	skb_queue_walk_safe(&e->arpq, skb, tmp) {
		__skb_unlink(skb, &e->arpq);
		cxgb3_ofld_send(dev, skb);
	}
	e->state = L2T_STATE_VALID;

	return 0;
}

/*
 * Add a packet to an L2T entry's queue of packets awaiting resolution.
 * Must be called with the entry's lock held.
 */
static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)
{
	__skb_queue_tail(&e->arpq, skb);
}

int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,
		     struct l2t_entry *e)
{
again:
	switch (e->state) {
	case L2T_STATE_STALE:	/* entry is stale, kick off revalidation */
		neigh_event_send(e->neigh, NULL);
		spin_lock_bh(&e->lock);
		if (e->state == L2T_STATE_STALE)
			e->state = L2T_STATE_VALID;
		spin_unlock_bh(&e->lock);
		/* fall through */
	case L2T_STATE_VALID:	/* fast-path, send the packet on */
		return cxgb3_ofld_send(dev, skb);
	case L2T_STATE_RESOLVING:
		spin_lock_bh(&e->lock);
		if (e->state != L2T_STATE_RESOLVING) {
			/* ARP already completed */
			spin_unlock_bh(&e->lock);
			goto again;
		}
		arpq_enqueue(e, skb);
		spin_unlock_bh(&e->lock);

		/*
		 * Only the first packet added to the arpq should kick off
		 * resolution.  However, because the alloc_skb below can fail,
		 * we allow each packet added to the arpq to retry resolution
		 * as a way of recovering from transient memory exhaustion.
		 * A better way would be to use a work request to retry L2T
		 * entries when there's no memory.
		 */
		if (!neigh_event_send(e->neigh, NULL)) {
			skb = alloc_skb(sizeof(struct cpl_l2t_write_req),
					GFP_ATOMIC);
			if (!skb)
				break;

			spin_lock_bh(&e->lock);
			if (!skb_queue_empty(&e->arpq))
				setup_l2e_send_pending(dev, skb, e);
			else	/* we lost the race */
				__kfree_skb(skb);
			spin_unlock_bh(&e->lock);
		}
	}
	return 0;
}

EXPORT_SYMBOL(t3_l2t_send_slow);
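
/*
 * Illustrative sketch (not compiled): how an offload user might push a
 * packet through the slow path.  set_arp_failure_handler() and the callback
 * type are assumed from l2t.h; error handling is elided.
 */
#if 0
static void example_arp_failure(struct t3cdev *dev, struct sk_buff *skb)
{
	kfree_skb(skb);			/* give up on this packet */
}

static void example_tx(struct t3cdev *dev, struct sk_buff *skb,
		       struct l2t_entry *e)
{
	set_arp_failure_handler(skb, example_arp_failure);
	t3_l2t_send_slow(dev, skb, e);	/* queues on e->arpq while resolving */
}
#endif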

void t3_l2t_send_event(struct t3cdev *dev, struct l2t_entry *e)
{
again:
	switch (e->state) {
	case L2T_STATE_STALE:	/* entry is stale, kick off revalidation */
		neigh_event_send(e->neigh, NULL);
		spin_lock_bh(&e->lock);
		if (e->state == L2T_STATE_STALE)
			e->state = L2T_STATE_VALID;
		spin_unlock_bh(&e->lock);
		return;
	case L2T_STATE_VALID:	/* fast-path, send the packet on */
		return;
	case L2T_STATE_RESOLVING:
		spin_lock_bh(&e->lock);
		if (e->state != L2T_STATE_RESOLVING) {
			/* ARP already completed */
			spin_unlock_bh(&e->lock);
			goto again;
		}
		spin_unlock_bh(&e->lock);

		/*
		 * Only the first packet added to the arpq should kick off
		 * resolution.  However, because the skb allocation in
		 * t3_l2t_send_slow() can fail, we allow each packet added to
		 * the arpq to retry resolution as a way of recovering from
		 * transient memory exhaustion.  A better way would be to use
		 * a work request to retry L2T entries when there's no memory.
		 */
		neigh_event_send(e->neigh, NULL);
	}
}

EXPORT_SYMBOL(t3_l2t_send_event);

/*
 * Allocate a free L2T entry.  Must be called with l2t_data.lock held.
 */
static struct l2t_entry *alloc_l2e(struct l2t_data *d)
{
	struct l2t_entry *end, *e, **p;

	if (!atomic_read(&d->nfree))
		return NULL;

	/* there's definitely a free entry */
	for (e = d->rover, end = &d->l2tab[d->nentries]; e != end; ++e)
		if (atomic_read(&e->refcnt) == 0)
			goto found;

	for (e = &d->l2tab[1]; atomic_read(&e->refcnt); ++e)
		;
found:
	d->rover = e + 1;
	atomic_dec(&d->nfree);

	/*
	 * The entry we found may be an inactive entry that is
	 * presently in the hash table.  We need to remove it.
	 */
	if (e->state != L2T_STATE_UNUSED) {
		int hash = arp_hash(e->addr, e->ifindex, d);

		for (p = &d->l2tab[hash].first; *p; p = &(*p)->next)
			if (*p == e) {
				*p = e->next;
				break;
			}
		e->state = L2T_STATE_UNUSED;
	}
	return e;
}

/*
 * Called when an L2T entry has no more users.  The entry is left in the hash
 * table since it is likely to be reused but we also bump nfree to indicate
 * that the entry can be reallocated for a different neighbor.  We also drop
 * the existing neighbor reference in case the neighbor is going away and is
 * waiting on our reference.
 *
 * Because entries can be reallocated to other neighbors once their ref count
 * drops to 0 we need to take the entry's lock to avoid races with a new
 * incarnation.
 */
void t3_l2e_free(struct l2t_data *d, struct l2t_entry *e)
{
	spin_lock_bh(&e->lock);
	if (atomic_read(&e->refcnt) == 0) {	/* hasn't been recycled */
		if (e->neigh) {
			neigh_release(e->neigh);
			e->neigh = NULL;
		}
	}
	spin_unlock_bh(&e->lock);
	atomic_inc(&d->nfree);
}

EXPORT_SYMBOL(t3_l2e_free);
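
/*
 * Illustrative sketch (not compiled): users normally drop their reference
 * through l2t_release() in l2t.h, which is assumed to call t3_l2e_free()
 * only when the count reaches zero (the usual atomic_dec_and_test pattern).
 */
#if 0
static void example_put(struct l2t_data *d, struct l2t_entry *e)
{
	l2t_release(d, e);	/* frees via t3_l2e_free() on the last put */
}
#endif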

/*
 * Update an L2T entry that was previously used for the same next hop as neigh.
 * Must be called with softirqs disabled.
 */
static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
{
	unsigned int nud_state;

	spin_lock(&e->lock);	/* avoid race with t3_l2t_free */

	if (neigh != e->neigh)
		neigh_replace(e, neigh);
	nud_state = neigh->nud_state;
	if (memcmp(e->dmac, neigh->ha, sizeof(e->dmac)) ||
	    !(nud_state & NUD_VALID))
		e->state = L2T_STATE_RESOLVING;
	else if (nud_state & NUD_CONNECTED)
		e->state = L2T_STATE_VALID;
	else
		e->state = L2T_STATE_STALE;
	spin_unlock(&e->lock);
}

struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
			     struct net_device *dev)
{
	struct l2t_entry *e;
	struct l2t_data *d = L2DATA(cdev);
	u32 addr = *(u32 *) neigh->primary_key;
	int ifidx = neigh->dev->ifindex;
	int hash = arp_hash(addr, ifidx, d);
	struct port_info *p = netdev_priv(dev);
	int smt_idx = p->port_id;

	write_lock_bh(&d->lock);
	for (e = d->l2tab[hash].first; e; e = e->next)
		if (e->addr == addr && e->ifindex == ifidx &&
		    e->smt_idx == smt_idx) {
			l2t_hold(d, e);
			if (atomic_read(&e->refcnt) == 1)
				reuse_entry(e, neigh);
			goto done;
		}

	/* Need to allocate a new entry */
	e = alloc_l2e(d);
	if (e) {
		spin_lock(&e->lock);	/* avoid race with t3_l2t_free */
		e->next = d->l2tab[hash].first;
		d->l2tab[hash].first = e;
		e->state = L2T_STATE_RESOLVING;
		e->addr = addr;
		e->ifindex = ifidx;
		e->smt_idx = smt_idx;
		atomic_set(&e->refcnt, 1);
		neigh_replace(e, neigh);
		if (neigh->dev->priv_flags & IFF_802_1Q_VLAN)
			e->vlan = vlan_dev_vlan_id(neigh->dev);
		else
			e->vlan = VLAN_NONE;
		spin_unlock(&e->lock);
	}
done:
	write_unlock_bh(&d->lock);
	return e;
}

EXPORT_SYMBOL(t3_l2t_get);
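
/*
 * Illustrative sketch (not compiled): the typical entry lifecycle.  A user
 * resolves a route, takes an L2T entry against the route's neighbour,
 * transmits through it, and finally releases its reference.  The dst/
 * neighbour handling is simplified and assumes a valid dst_entry; error
 * paths are elided.
 */
#if 0
static int example_connect(struct t3cdev *tdev, struct dst_entry *dst,
			   struct net_device *egress_dev, struct sk_buff *skb)
{
	struct l2t_entry *e;

	e = t3_l2t_get(tdev, dst->neighbour, egress_dev);
	if (!e)
		return -ENOMEM;

	t3_l2t_send_slow(tdev, skb, e);	/* sends or queues depending on e->state */
	l2t_release(L2DATA(tdev), e);
	return 0;
}
#endif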

/*
 * Called when address resolution fails for an L2T entry to handle packets
 * on the arpq head.  If a packet specifies a failure handler it is invoked,
 * otherwise the packet is sent to the offload device.
 *
 * XXX: maybe we should abandon the latter behavior and just require a failure
 * handler.
 */
static void handle_failed_resolution(struct t3cdev *dev,
				     struct sk_buff_head *arpq)
{
	struct sk_buff *skb, *tmp;

	skb_queue_walk_safe(arpq, skb, tmp) {
		struct l2t_skb_cb *cb = L2T_SKB_CB(skb);

		__skb_unlink(skb, arpq);
		if (cb->arp_failure_handler)
			cb->arp_failure_handler(dev, skb);
		else
			cxgb3_ofld_send(dev, skb);
	}
}

/*
 * Called when the host's ARP layer makes a change to some entry that is
 * loaded into the HW L2 table.
 */
void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
{
	struct sk_buff_head arpq;
	struct l2t_entry *e;
	struct l2t_data *d = L2DATA(dev);
	u32 addr = *(u32 *) neigh->primary_key;
	int ifidx = neigh->dev->ifindex;
	int hash = arp_hash(addr, ifidx, d);

	read_lock_bh(&d->lock);
	for (e = d->l2tab[hash].first; e; e = e->next)
		if (e->addr == addr && e->ifindex == ifidx) {
			spin_lock(&e->lock);
			goto found;
		}
	read_unlock_bh(&d->lock);
	return;

found:
	__skb_queue_head_init(&arpq);

	read_unlock(&d->lock);
	if (atomic_read(&e->refcnt)) {
		if (neigh != e->neigh)
			neigh_replace(e, neigh);

		if (e->state == L2T_STATE_RESOLVING) {
			if (neigh->nud_state & NUD_FAILED) {
				skb_queue_splice_init(&e->arpq, &arpq);
			} else if (neigh->nud_state & (NUD_CONNECTED|NUD_STALE))
				setup_l2e_send_pending(dev, NULL, e);
		} else {
			e->state = neigh->nud_state & NUD_CONNECTED ?
			    L2T_STATE_VALID : L2T_STATE_STALE;
			if (memcmp(e->dmac, neigh->ha, 6))
				setup_l2e_send_pending(dev, NULL, e);
		}
	}
	spin_unlock_bh(&e->lock);

	if (!skb_queue_empty(&arpq))
		handle_failed_resolution(dev, &arpq);
}

struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)
{
	struct l2t_data *d;
	int i, size = sizeof(*d) + l2t_capacity * sizeof(struct l2t_entry);

	d = cxgb_alloc_mem(size);
	if (!d)
		return NULL;

	d->nentries = l2t_capacity;
	d->rover = &d->l2tab[1];	/* entry 0 is not used */
	atomic_set(&d->nfree, l2t_capacity - 1);
	rwlock_init(&d->lock);

	for (i = 0; i < l2t_capacity; ++i) {
		d->l2tab[i].idx = i;
		d->l2tab[i].state = L2T_STATE_UNUSED;
		__skb_queue_head_init(&d->l2tab[i].arpq);
		spin_lock_init(&d->l2tab[i].lock);
		atomic_set(&d->l2tab[i].refcnt, 0);
	}
	return d;
}
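
/*
 * Illustrative sketch (not compiled): setting up and tearing down the
 * table.  The capacity should be a power of 2 so arp_hash()'s mask covers
 * every bucket; 2048 is only an example value.
 */
#if 0
static struct l2t_data *example_setup_l2t(void)
{
	struct l2t_data *d = t3_init_l2t(2048);	/* example capacity */

	/* ... use t3_l2t_get()/l2t_release(), then t3_free_l2t(d) ... */
	return d;
}
#endif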

void t3_free_l2t(struct l2t_data *d)
{
	cxgb_free_mem(d);
}