Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (75 commits)
  net: Handle NETREG_UNINITIALIZED devices correctly
  can: add the driver for Analog Devices Blackfin on-chip CAN controllers
  xfrm: Fix truncation length of authentication algorithms installed via PF_KEY
  net: use compat helper functions in compat_sys_recvmmsg
  net: fix compat_sys_recvmmsg parameter type
  cxgb3: Fixing EEH handlers
  cnic: Zero out status block and Event Queue indices.
  cnic: Send delete command when shutting down iSCSI ring.
  net: smc91x: Fix up type mismatch in smc_drv_resume().
  smc91x: fix unused flags warnings on UP systems
  MAINTAINERS: Transfering maintainership of cdc-ether
  net: Add missing TST_CFG_WRITE bits around sky2_pci_write
  net: Fix Yukon-2 Optima TCP offload setup
  net: niu uses crc32, so select CRC32
  wireless: update old static regulatory domain rules
  mac80211: Revert 'Use correct sign for mesh active path refresh'
  mac80211: Fixed bug in mesh portal paths
  net/mac80211: Correct size given to memset
  b43: Remove reset after fatal DMA error
  rtl8187: add radio led and fix warnings on suspend
  ...
Linus Torvalds 2009-12-11 20:58:20 -08:00
commit 053fe57ac2
77 changed files with 1687 additions and 604 deletions

View file

@ -68,22 +68,38 @@ GigaSet 307x Device Driver
for troubleshooting or to pass module parameters.
The module ser_gigaset provides a serial line discipline N_GIGASET_M101
which drives the device through the regular serial line driver. It must
be attached to the serial line to which the M101 is connected with the
ldattach(8) command (requires util-linux-ng release 2.14 or later), for
example:
ldattach GIGASET_M101 /dev/ttyS1
which uses the regular serial port driver to access the device, and must
therefore be attached to the serial device to which the M101 is connected.
The ldattach(8) command (included in util-linux-ng release 2.14 or later)
can be used for that purpose, for example:
ldattach GIGASET_M101 /dev/ttyS1
This will open the device file, attach the line discipline to it, and
then sleep in the background, keeping the device open so that the line
discipline remains active. To deactivate it, kill the daemon, for example
with
killall ldattach
before disconnecting the device. To have this happen automatically at
system startup/shutdown on an LSB compatible system, create and activate
an appropriate LSB startup script /etc/init.d/gigaset. (The init name
'gigaset' is officially assigned to this project by LANANA.)
Alternatively, just add the 'ldattach' command line to /etc/rc.local.
The modules accept the following parameters:
Module        Parameter  Meaning
gigaset       debug      debug level (see section 3.2.)
              startmode  initial operation mode (see section 2.5.):
bas_gigaset )               1=ISDN4linux/CAPI (default), 0=Unimodem
ser_gigaset )
usb_gigaset ) cidmode    initial Call-ID mode setting (see section
                            2.5.): 1=on (default), 0=off
Depending on your distribution you may want to create a separate module
configuration file /etc/modprobe.d/gigaset for these, or add them to a
custom file like /etc/modprobe.conf.local.
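As an illustration only (not part of the patch), such a configuration file could combine the parameters from the table above in one place, e.g. a hypothetical /etc/modprobe.d/gigaset containing:
options gigaset debug=0
options usb_gigaset startmode=1 cidmode=1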
2.2. Device nodes for user space programs
------------------------------------
The device can be accessed from user space (eg. by the user space tools
@ -93,11 +109,48 @@ GigaSet 307x Device Driver
- /dev/ttyGU0 for M105 (USB data boxes)
- /dev/ttyGB0 for the base driver (direct USB connection)
You can also select a "default device" which is used by the frontends when
If you connect more than one device of a type, they will get consecutive
device nodes, eg. /dev/ttyGU1 for a second M105.
You can also set a "default device" for the user space tools to use when
no device node is given as parameter, by creating a symlink /dev/ttyG to
one of them, eg.:
ln -s /dev/ttyGB0 /dev/ttyG
The devices accept the following device specific ioctl calls
(defined in gigaset_dev.h):
ioctl(int fd, GIGASET_REDIR, int *cmd);
If cmd==1, the device is set to be controlled exclusively through the
character device node; access from the ISDN subsystem is blocked.
If cmd==0, the device is set to be used from the ISDN subsystem and does
not communicate through the character device node.
ioctl(int fd, GIGASET_CONFIG, int *cmd);
(ser_gigaset and usb_gigaset only)
If cmd==1, the device is set to adapter configuration mode where commands
are interpreted by the M10x DECT adapter itself instead of being
forwarded to the base station. In this mode, the device accepts the
commands described in Siemens document "AT-Kommando Alignment M10x Data"
for setting the operation mode, associating with a base station and
querying parameters like field strength and signal quality.
Note that there is no ioctl command for leaving adapter configuration
mode and returning to regular operation. In order to leave adapter
configuration mode, write the command ATO to the device.
ioctl(int fd, GIGASET_BRKCHARS, unsigned char brkchars[6]);
(usb_gigaset only)
Set the break characters on an M105's internal serial adapter to the six
bytes stored in brkchars[]. Unused bytes should be set to zero.
ioctl(int fd, GIGASET_VERSION, unsigned version[4]);
Retrieve version information from the driver. version[0] must be set to
one of:
- GIGVER_DRIVER: retrieve driver version
- GIGVER_COMPAT: retrieve interface compatibility version
- GIGVER_FWBASE: retrieve the firmware version of the base
Upon return, version[] is filled with the requested version information.
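As a minimal user space sketch of the ioctl interface described above (assuming gigaset_dev.h is available to the program and the M105 data box appears as /dev/ttyGU0, see section 2.2), switching the adapter into configuration mode could look like:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include "gigaset_dev.h"    /* GIGASET_REDIR, GIGASET_CONFIG, ... */

    int main(void)
    {
        int cmd = 1;                           /* 1 = enter adapter configuration mode */
        int fd = open("/dev/ttyGU0", O_RDWR);  /* M105 device node, see section 2.2 */

        if (fd < 0 || ioctl(fd, GIGASET_CONFIG, &cmd) < 0) {
            perror("GIGASET_CONFIG");
            return 1;
        }
        /* the adapter now interprets the M10x configuration AT commands itself;
         * write "ATO" to the device to return to regular operation */
        close(fd);
        return 0;
    }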
2.3. ISDN4linux
----------
@ -113,15 +166,24 @@ GigaSet 307x Device Driver
Connection State: 0, Response: -1
gigaset_process_response: resp_code -1 in ConState 0 !
Timeout occurred
you might need to use unimodem mode. (see section 2.5.)
you probably need to use unimodem mode. (see section 2.5.)
2.4. CAPI
----
If the driver is compiled with CAPI support (kernel configuration option
GIGASET_CAPI, experimental) it can also be used with CAPI 2.0 kernel and
user space applications. ISDN4Linux is supported in this configuration
user space applications. For user space access, the module capi.ko must
be loaded. The capiinit command (included in the capi4k-utils package)
does this for you.
The CAPI variant of the driver supports legacy ISDN4Linux applications
via the capidrv compatibility driver. The kernel module capidrv.ko must
be loaded explicitly ("modprobe capidrv") if needed.
be loaded explicitly with the command
modprobe capidrv
if needed, and cannot be unloaded again without unloading the driver
first. (These are limitations of capidrv.)
The note about unimodem mode in the preceding section applies here, too.
2.5. Unimodem mode
-------------
@ -134,9 +196,14 @@ GigaSet 307x Device Driver
You can switch back using
gigacontr --mode isdn
You can also load the driver using e.g.
modprobe usb_gigaset startmode=0
to prevent the driver from starting in "isdn4linux mode".
You can also put the driver directly into Unimodem mode when it's loaded,
by passing the module parameter startmode=0 to the hardware specific
module, e.g.
modprobe usb_gigaset startmode=0
or by adding a line like
options usb_gigaset startmode=0
to an appropriate module configuration file, like /etc/modprobe.d/gigaset
or /etc/modprobe.conf.local.
In this mode the device works like a modem connected to a serial port
(the /dev/ttyGU0, ... mentioned above) which understands the commands
@ -164,9 +231,8 @@ GigaSet 307x Device Driver
options ppp_async flag_time=0
to /etc/modprobe.conf. If your distribution has some local module
configuration file like /etc/modprobe.conf.local,
using that should be preferred.
to an appropriate module configuration file, like /etc/modprobe.d/gigaset
or /etc/modprobe.conf.local.
2.6. Call-ID (CID) mode
------------------
@ -189,12 +255,13 @@ GigaSet 307x Device Driver
settings (CID mode).
- If you have several DECT data devices (M10x) which you want to use
in turn, select Unimodem mode by passing the parameter "cidmode=0" to
the driver ("modprobe usb_gigaset cidmode=0" or modprobe.conf).
the appropriate driver module (ser_gigaset or usb_gigaset).
If you want both of these at once, you are out of luck.
You can also use /sys/class/tty/ttyGxy/cidmode for changing the CID mode
setting (ttyGxy is ttyGU0 or ttyGB0).
You can also use the tty class parameter "cidmode" of the device to
change its CID mode while the driver is loaded, eg.
echo 0 > /sys/class/tty/ttyGU0/cidmode
2.7. Unregistered Wireless Devices (M101/M105)
-----------------------------------------
@ -208,7 +275,7 @@ GigaSet 307x Device Driver
driver. In that situation, a restricted set of functions is available
which includes, in particular, those necessary for registering the device
to a base or for switching it between Fixed Part and Portable Part
modes.
modes. See the gigacontr(8) manpage for details.
3. Troubleshooting
---------------
@ -222,9 +289,7 @@ GigaSet 307x Device Driver
options isdn dialtimeout=15
to /etc/modprobe.conf. If your distribution has some local module
configuration file like /etc/modprobe.conf.local,
using that should be preferred.
to /etc/modprobe.d/gigaset, /etc/modprobe.conf.local or a similar file.
Problem:
Your isdn script aborts with a message about isdnlog.
@ -264,7 +329,8 @@ GigaSet 307x Device Driver
The initial value can be set using the debug parameter when loading the
module "gigaset", e.g. by adding a line
options gigaset debug=0
to /etc/modprobe.conf, ...
to your module configuration file, eg. /etc/modprobe.d/gigaset or
/etc/modprobe.conf.local.
Generated debugging information can be found
- as output of the command

View file

@ -5439,10 +5439,9 @@ S: Supported
F: drivers/block/ub.c
USB CDC ETHERNET DRIVER
M: Greg Kroah-Hartman <greg@kroah.com>
M: Oliver Neukum <oliver@neukum.name>
L: linux-usb@vger.kernel.org
S: Maintained
W: http://www.kroah.com/linux-usb/
F: drivers/net/usb/cdc_*.c
F: include/linux/usb/cdc.h

View file

@ -2505,7 +2505,7 @@ he_close(struct atm_vcc *vcc)
* TBRQ, the host issues the close command to the adapter.
*/
while (((tx_inuse = atomic_read(&sk_atm(vcc)->sk_wmem_alloc)) > 0) &&
while (((tx_inuse = atomic_read(&sk_atm(vcc)->sk_wmem_alloc)) > 1) &&
(retry < MAX_RETRY)) {
msleep(sleep);
if (sleep < 250)
@ -2514,7 +2514,7 @@ he_close(struct atm_vcc *vcc)
++retry;
}
if (tx_inuse)
if (tx_inuse > 1)
hprintk("close tx cid 0x%x tx_inuse = %d\n", cid, tx_inuse);
/* 2.3.1.1 generic close operations with flush */

View file

@ -29,7 +29,7 @@
#endif
/* Module parameters */
int gigaset_debuglevel = DEBUG_DEFAULT;
int gigaset_debuglevel;
EXPORT_SYMBOL_GPL(gigaset_debuglevel);
module_param_named(debug, gigaset_debuglevel, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(debug, "debug level");

View file

@ -2696,6 +2696,7 @@ config NETXEN_NIC
config NIU
tristate "Sun Neptune 10Gbit Ethernet support"
depends on PCI
select CRC32
help
This enables support for cards based upon Sun's
Neptune chipset.

View file

@ -479,6 +479,9 @@ struct atl1c_buffer {
#define ATL1C_PCIMAP_PAGE 0x0008
#define ATL1C_PCIMAP_TYPE_MASK 0x000C
#define ATL1C_PCIMAP_TODEVICE 0x0010
#define ATL1C_PCIMAP_FROMDEVICE 0x0020
#define ATL1C_PCIMAP_DIRECTION_MASK 0x0030
dma_addr_t dma;
};
@ -487,9 +490,11 @@ struct atl1c_buffer {
((buff)->flags) |= (state); \
} while (0)
#define ATL1C_SET_PCIMAP_TYPE(buff, type) do { \
((buff)->flags) &= ~ATL1C_PCIMAP_TYPE_MASK; \
((buff)->flags) |= (type); \
#define ATL1C_SET_PCIMAP_TYPE(buff, type, direction) do { \
((buff)->flags) &= ~ATL1C_PCIMAP_TYPE_MASK; \
((buff)->flags) |= (type); \
((buff)->flags) &= ~ATL1C_PCIMAP_DIRECTION_MASK; \
((buff)->flags) |= (direction); \
} while (0)
/* transimit packet descriptor (tpd) ring */
@ -550,6 +555,9 @@ struct atl1c_adapter {
#define __AT_TESTING 0x0001
#define __AT_RESETTING 0x0002
#define __AT_DOWN 0x0003
u8 work_event;
#define ATL1C_WORK_EVENT_RESET 0x01
#define ATL1C_WORK_EVENT_LINK_CHANGE 0x02
u32 msg_enable;
bool have_msi;
@ -561,8 +569,7 @@ struct atl1c_adapter {
spinlock_t tx_lock;
atomic_t irq_sem;
struct work_struct reset_task;
struct work_struct link_chg_task;
struct work_struct common_task;
struct timer_list watchdog_timer;
struct timer_list phy_config_timer;

View file

@ -198,27 +198,12 @@ static void atl1c_phy_config(unsigned long data)
void atl1c_reinit_locked(struct atl1c_adapter *adapter)
{
WARN_ON(in_interrupt());
atl1c_down(adapter);
atl1c_up(adapter);
clear_bit(__AT_RESETTING, &adapter->flags);
}
static void atl1c_reset_task(struct work_struct *work)
{
struct atl1c_adapter *adapter;
struct net_device *netdev;
adapter = container_of(work, struct atl1c_adapter, reset_task);
netdev = adapter->netdev;
netif_device_detach(netdev);
atl1c_down(adapter);
atl1c_up(adapter);
netif_device_attach(netdev);
}
static void atl1c_check_link_status(struct atl1c_adapter *adapter)
{
struct atl1c_hw *hw = &adapter->hw;
@ -275,18 +260,6 @@ static void atl1c_check_link_status(struct atl1c_adapter *adapter)
}
}
/*
* atl1c_link_chg_task - deal with link change event Out of interrupt context
* @netdev: network interface device structure
*/
static void atl1c_link_chg_task(struct work_struct *work)
{
struct atl1c_adapter *adapter;
adapter = container_of(work, struct atl1c_adapter, link_chg_task);
atl1c_check_link_status(adapter);
}
static void atl1c_link_chg_event(struct atl1c_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
@ -311,19 +284,39 @@ static void atl1c_link_chg_event(struct atl1c_adapter *adapter)
adapter->link_speed = SPEED_0;
}
}
schedule_work(&adapter->link_chg_task);
adapter->work_event |= ATL1C_WORK_EVENT_LINK_CHANGE;
schedule_work(&adapter->common_task);
}
static void atl1c_common_task(struct work_struct *work)
{
struct atl1c_adapter *adapter;
struct net_device *netdev;
adapter = container_of(work, struct atl1c_adapter, common_task);
netdev = adapter->netdev;
if (adapter->work_event & ATL1C_WORK_EVENT_RESET) {
netif_device_detach(netdev);
atl1c_down(adapter);
atl1c_up(adapter);
netif_device_attach(netdev);
return;
}
if (adapter->work_event & ATL1C_WORK_EVENT_LINK_CHANGE)
atl1c_check_link_status(adapter);
return;
}
static void atl1c_del_timer(struct atl1c_adapter *adapter)
{
del_timer_sync(&adapter->phy_config_timer);
}
static void atl1c_cancel_work(struct atl1c_adapter *adapter)
{
cancel_work_sync(&adapter->reset_task);
cancel_work_sync(&adapter->link_chg_task);
}
/*
* atl1c_tx_timeout - Respond to a Tx Hang
@ -334,7 +327,8 @@ static void atl1c_tx_timeout(struct net_device *netdev)
struct atl1c_adapter *adapter = netdev_priv(netdev);
/* Do the reset outside of interrupt context */
schedule_work(&adapter->reset_task);
adapter->work_event |= ATL1C_WORK_EVENT_RESET;
schedule_work(&adapter->common_task);
}
/*
@ -713,15 +707,21 @@ static int __devinit atl1c_sw_init(struct atl1c_adapter *adapter)
static inline void atl1c_clean_buffer(struct pci_dev *pdev,
struct atl1c_buffer *buffer_info, int in_irq)
{
u16 pci_direction;
if (buffer_info->flags & ATL1C_BUFFER_FREE)
return;
if (buffer_info->dma) {
if (buffer_info->flags & ATL1C_PCIMAP_FROMDEVICE)
pci_direction = PCI_DMA_FROMDEVICE;
else
pci_direction = PCI_DMA_TODEVICE;
if (buffer_info->flags & ATL1C_PCIMAP_SINGLE)
pci_unmap_single(pdev, buffer_info->dma,
buffer_info->length, PCI_DMA_TODEVICE);
buffer_info->length, pci_direction);
else if (buffer_info->flags & ATL1C_PCIMAP_PAGE)
pci_unmap_page(pdev, buffer_info->dma,
buffer_info->length, PCI_DMA_TODEVICE);
buffer_info->length, pci_direction);
}
if (buffer_info->skb) {
if (in_irq)
@ -1533,7 +1533,8 @@ static irqreturn_t atl1c_intr(int irq, void *data)
/* reset MAC */
hw->intr_mask &= ~ISR_ERROR;
AT_WRITE_REG(hw, REG_IMR, hw->intr_mask);
schedule_work(&adapter->reset_task);
adapter->work_event |= ATL1C_WORK_EVENT_RESET;
schedule_work(&adapter->common_task);
break;
}
@ -1606,7 +1607,8 @@ static int atl1c_alloc_rx_buffer(struct atl1c_adapter *adapter, const int ringid
buffer_info->dma = pci_map_single(pdev, vir_addr,
buffer_info->length,
PCI_DMA_FROMDEVICE);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE,
ATL1C_PCIMAP_FROMDEVICE);
rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
rfd_next_to_use = next_next;
if (++next_next == rfd_ring->count)
@ -1967,7 +1969,8 @@ static void atl1c_tx_map(struct atl1c_adapter *adapter,
buffer_info->dma = pci_map_single(adapter->pdev,
skb->data, hdr_len, PCI_DMA_TODEVICE);
ATL1C_SET_BUFFER_STATE(buffer_info, ATL1C_BUFFER_BUSY);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE,
ATL1C_PCIMAP_TODEVICE);
mapped_len += map_len;
use_tpd->buffer_addr = cpu_to_le64(buffer_info->dma);
use_tpd->buffer_len = cpu_to_le16(buffer_info->length);
@ -1988,7 +1991,8 @@ static void atl1c_tx_map(struct atl1c_adapter *adapter,
pci_map_single(adapter->pdev, skb->data + mapped_len,
buffer_info->length, PCI_DMA_TODEVICE);
ATL1C_SET_BUFFER_STATE(buffer_info, ATL1C_BUFFER_BUSY);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_SINGLE,
ATL1C_PCIMAP_TODEVICE);
use_tpd->buffer_addr = cpu_to_le64(buffer_info->dma);
use_tpd->buffer_len = cpu_to_le16(buffer_info->length);
}
@ -2009,7 +2013,8 @@ static void atl1c_tx_map(struct atl1c_adapter *adapter,
buffer_info->length,
PCI_DMA_TODEVICE);
ATL1C_SET_BUFFER_STATE(buffer_info, ATL1C_BUFFER_BUSY);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_PAGE);
ATL1C_SET_PCIMAP_TYPE(buffer_info, ATL1C_PCIMAP_PAGE,
ATL1C_PCIMAP_TODEVICE);
use_tpd->buffer_addr = cpu_to_le64(buffer_info->dma);
use_tpd->buffer_len = cpu_to_le16(buffer_info->length);
}
@ -2198,8 +2203,7 @@ void atl1c_down(struct atl1c_adapter *adapter)
struct net_device *netdev = adapter->netdev;
atl1c_del_timer(adapter);
atl1c_cancel_work(adapter);
adapter->work_event = 0; /* clear all event */
/* signal that we're down so the interrupt handler does not
* reschedule our watchdog timer */
set_bit(__AT_DOWN, &adapter->flags);
@ -2599,8 +2603,8 @@ static int __devinit atl1c_probe(struct pci_dev *pdev,
adapter->hw.mac_addr[4], adapter->hw.mac_addr[5]);
atl1c_hw_set_mac_addr(&adapter->hw);
INIT_WORK(&adapter->reset_task, atl1c_reset_task);
INIT_WORK(&adapter->link_chg_task, atl1c_link_chg_task);
INIT_WORK(&adapter->common_task, atl1c_common_task);
adapter->work_event = 0;
err = register_netdev(netdev);
if (err) {
dev_err(&pdev->dev, "register netdevice failed\n");

View file

@ -1505,8 +1505,7 @@ static int b44_magic_pattern(u8 *macaddr, u8 *ppattern, u8 *pmask, int offset)
for (k = 0; k< ethaddr_bytes; k++) {
ppattern[offset + magicsync +
(j * ETH_ALEN) + k] = macaddr[k];
len++;
set_bit(len, (unsigned long *) pmask);
set_bit(len++, (unsigned long *) pmask);
}
}
return len - 1;

View file

@ -54,6 +54,15 @@ config CAN_MCP251X
---help---
Driver for the Microchip MCP251x SPI CAN controllers.
config CAN_BFIN
depends on CAN_DEV && (BF534 || BF536 || BF537 || BF538 || BF539 || BF54x)
tristate "Analog Devices Blackfin on-chip CAN"
---help---
Driver for the Analog Devices Blackfin on-chip CAN controllers
To compile this driver as a module, choose M here: the
module will be called bfin_can.
source "drivers/net/can/mscan/Kconfig"
source "drivers/net/can/sja1000/Kconfig"

View file

@ -14,5 +14,6 @@ obj-$(CONFIG_CAN_MSCAN) += mscan/
obj-$(CONFIG_CAN_AT91) += at91_can.o
obj-$(CONFIG_CAN_TI_HECC) += ti_hecc.o
obj-$(CONFIG_CAN_MCP251X) += mcp251x.o
obj-$(CONFIG_CAN_BFIN) += bfin_can.o
ccflags-$(CONFIG_CAN_DEBUG_DEVICES) := -DDEBUG

783
drivers/net/can/bfin_can.c Normal file
View file

@ -0,0 +1,783 @@
/*
* Blackfin On-Chip CAN Driver
*
* Copyright 2004-2009 Analog Devices Inc.
*
* Enter bugs at http://blackfin.uclinux.org/
*
* Licensed under the GPL-2 or later.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/platform_device.h>
#include <linux/can.h>
#include <linux/can/dev.h>
#include <linux/can/error.h>
#include <asm/portmux.h>
#define DRV_NAME "bfin_can"
#define BFIN_CAN_TIMEOUT 100
/*
* transmit and receive channels
*/
#define TRANSMIT_CHL 24
#define RECEIVE_STD_CHL 0
#define RECEIVE_EXT_CHL 4
#define RECEIVE_RTR_CHL 8
#define RECEIVE_EXT_RTR_CHL 12
#define MAX_CHL_NUMBER 32
/*
* bfin can registers layout
*/
struct bfin_can_mask_regs {
u16 aml;
u16 dummy1;
u16 amh;
u16 dummy2;
};
struct bfin_can_channel_regs {
u16 data[8];
u16 dlc;
u16 dummy1;
u16 tsv;
u16 dummy2;
u16 id0;
u16 dummy3;
u16 id1;
u16 dummy4;
};
struct bfin_can_regs {
/*
* global control and status registers
*/
u16 mc1; /* offset 0 */
u16 dummy1;
u16 md1; /* offset 4 */
u16 rsv1[13];
u16 mbtif1; /* offset 0x20 */
u16 dummy2;
u16 mbrif1; /* offset 0x24 */
u16 dummy3;
u16 mbim1; /* offset 0x28 */
u16 rsv2[11];
u16 mc2; /* offset 0x40 */
u16 dummy4;
u16 md2; /* offset 0x44 */
u16 dummy5;
u16 trs2; /* offset 0x48 */
u16 rsv3[11];
u16 mbtif2; /* offset 0x60 */
u16 dummy6;
u16 mbrif2; /* offset 0x64 */
u16 dummy7;
u16 mbim2; /* offset 0x68 */
u16 rsv4[11];
u16 clk; /* offset 0x80 */
u16 dummy8;
u16 timing; /* offset 0x84 */
u16 rsv5[3];
u16 status; /* offset 0x8c */
u16 dummy9;
u16 cec; /* offset 0x90 */
u16 dummy10;
u16 gis; /* offset 0x94 */
u16 dummy11;
u16 gim; /* offset 0x98 */
u16 rsv6[3];
u16 ctrl; /* offset 0xa0 */
u16 dummy12;
u16 intr; /* offset 0xa4 */
u16 rsv7[7];
u16 esr; /* offset 0xb4 */
u16 rsv8[37];
/*
* channel(mailbox) mask and message registers
*/
struct bfin_can_mask_regs msk[MAX_CHL_NUMBER]; /* offset 0x100 */
struct bfin_can_channel_regs chl[MAX_CHL_NUMBER]; /* offset 0x200 */
};
/*
* bfin can private data
*/
struct bfin_can_priv {
struct can_priv can; /* must be the first member */
struct net_device *dev;
void __iomem *membase;
int rx_irq;
int tx_irq;
int err_irq;
unsigned short *pin_list;
};
/*
* bfin can timing parameters
*/
static struct can_bittiming_const bfin_can_bittiming_const = {
.name = DRV_NAME,
.tseg1_min = 1,
.tseg1_max = 16,
.tseg2_min = 1,
.tseg2_max = 8,
.sjw_max = 4,
/*
* Although the BRP field can be set to any value, it is recommended
* that the value be greater than or equal to 4, as restrictions
* apply to the bit timing configuration when BRP is less than 4.
*/
.brp_min = 4,
.brp_max = 1024,
.brp_inc = 1,
};
static int bfin_can_set_bittiming(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
struct can_bittiming *bt = &priv->can.bittiming;
u16 clk, timing;
clk = bt->brp - 1;
timing = ((bt->sjw - 1) << 8) | (bt->prop_seg + bt->phase_seg1 - 1) |
((bt->phase_seg2 - 1) << 4);
/*
* If the SAM bit is set, the input signal is oversampled three times
* at the SCLK rate.
*/
if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
timing |= SAM;
bfin_write16(&reg->clk, clk);
bfin_write16(&reg->timing, timing);
dev_info(dev->dev.parent, "setting CLOCK=0x%04x TIMING=0x%04x\n",
clk, timing);
return 0;
}
static void bfin_can_set_reset_mode(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
int timeout = BFIN_CAN_TIMEOUT;
int i;
/* disable interrupts */
bfin_write16(&reg->mbim1, 0);
bfin_write16(&reg->mbim2, 0);
bfin_write16(&reg->gim, 0);
/* reset can and enter configuration mode */
bfin_write16(&reg->ctrl, SRS | CCR);
SSYNC();
bfin_write16(&reg->ctrl, CCR);
SSYNC();
while (!(bfin_read16(&reg->ctrl) & CCA)) {
udelay(10);
if (--timeout == 0) {
dev_err(dev->dev.parent,
"fail to enter configuration mode\n");
BUG();
}
}
/*
* All mailbox configurations are marked as inactive
* by writing to CAN Mailbox Configuration Registers 1 and 2
* For all bits: 0 - Mailbox disabled, 1 - Mailbox enabled
*/
bfin_write16(&reg->mc1, 0);
bfin_write16(&reg->mc2, 0);
/* Set Mailbox Direction */
bfin_write16(&reg->md1, 0xFFFF); /* mailbox 1-16 are RX */
bfin_write16(&reg->md2, 0); /* mailbox 17-32 are TX */
/* RECEIVE_STD_CHL */
for (i = 0; i < 2; i++) {
bfin_write16(&reg->chl[RECEIVE_STD_CHL + i].id0, 0);
bfin_write16(&reg->chl[RECEIVE_STD_CHL + i].id1, AME);
bfin_write16(&reg->chl[RECEIVE_STD_CHL + i].dlc, 0);
bfin_write16(&reg->msk[RECEIVE_STD_CHL + i].amh, 0x1FFF);
bfin_write16(&reg->msk[RECEIVE_STD_CHL + i].aml, 0xFFFF);
}
/* RECEIVE_EXT_CHL */
for (i = 0; i < 2; i++) {
bfin_write16(&reg->chl[RECEIVE_EXT_CHL + i].id0, 0);
bfin_write16(&reg->chl[RECEIVE_EXT_CHL + i].id1, AME | IDE);
bfin_write16(&reg->chl[RECEIVE_EXT_CHL + i].dlc, 0);
bfin_write16(&reg->msk[RECEIVE_EXT_CHL + i].amh, 0x1FFF);
bfin_write16(&reg->msk[RECEIVE_EXT_CHL + i].aml, 0xFFFF);
}
bfin_write16(&reg->mc2, BIT(TRANSMIT_CHL - 16));
bfin_write16(&reg->mc1, BIT(RECEIVE_STD_CHL) + BIT(RECEIVE_EXT_CHL));
SSYNC();
priv->can.state = CAN_STATE_STOPPED;
}
static void bfin_can_set_normal_mode(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
int timeout = BFIN_CAN_TIMEOUT;
/*
* leave configuration mode
*/
bfin_write16(&reg->ctrl, bfin_read16(&reg->ctrl) & ~CCR);
while (bfin_read16(&reg->status) & CCA) {
udelay(10);
if (--timeout == 0) {
dev_err(dev->dev.parent,
"fail to leave configuration mode\n");
BUG();
}
}
/*
* clear _All_ tx and rx interrupts
*/
bfin_write16(&reg->mbtif1, 0xFFFF);
bfin_write16(&reg->mbtif2, 0xFFFF);
bfin_write16(&reg->mbrif1, 0xFFFF);
bfin_write16(&reg->mbrif2, 0xFFFF);
/*
* clear global interrupt status register
*/
bfin_write16(&reg->gis, 0x7FF); /* overwrites with '1' */
/*
* Initialize Interrupts
* - set bits in the mailbox interrupt mask register
* - global interrupt mask
*/
bfin_write16(&reg->mbim1, BIT(RECEIVE_STD_CHL) + BIT(RECEIVE_EXT_CHL));
bfin_write16(&reg->mbim2, BIT(TRANSMIT_CHL - 16));
bfin_write16(&reg->gim, EPIM | BOIM | RMLIM);
SSYNC();
}
static void bfin_can_start(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
/* enter reset mode */
if (priv->can.state != CAN_STATE_STOPPED)
bfin_can_set_reset_mode(dev);
/* leave reset mode */
bfin_can_set_normal_mode(dev);
}
static int bfin_can_set_mode(struct net_device *dev, enum can_mode mode)
{
switch (mode) {
case CAN_MODE_START:
bfin_can_start(dev);
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
static int bfin_can_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
struct can_frame *cf = (struct can_frame *)skb->data;
u8 dlc = cf->can_dlc;
canid_t id = cf->can_id;
u8 *data = cf->data;
u16 val;
int i;
netif_stop_queue(dev);
/* fill id */
if (id & CAN_EFF_FLAG) {
bfin_write16(&reg->chl[TRANSMIT_CHL].id0, id);
if (id & CAN_RTR_FLAG)
writew(((id & 0x1FFF0000) >> 16) | IDE | AME | RTR,
&reg->chl[TRANSMIT_CHL].id1);
else
writew(((id & 0x1FFF0000) >> 16) | IDE | AME,
&reg->chl[TRANSMIT_CHL].id1);
} else {
if (id & CAN_RTR_FLAG)
writew((id << 2) | AME | RTR,
&reg->chl[TRANSMIT_CHL].id1);
else
bfin_write16(&reg->chl[TRANSMIT_CHL].id1,
(id << 2) | AME);
}
/* fill payload */
for (i = 0; i < 8; i += 2) {
val = ((7 - i) < dlc ? (data[7 - i]) : 0) +
((6 - i) < dlc ? (data[6 - i] << 8) : 0);
bfin_write16(&reg->chl[TRANSMIT_CHL].data[i], val);
}
/* fill data length code */
bfin_write16(&reg->chl[TRANSMIT_CHL].dlc, dlc);
dev->trans_start = jiffies;
can_put_echo_skb(skb, dev, 0);
/* set transmit request */
bfin_write16(&reg->trs2, BIT(TRANSMIT_CHL - 16));
return 0;
}
static void bfin_can_rx(struct net_device *dev, u16 isrc)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
struct bfin_can_regs __iomem *reg = priv->membase;
struct can_frame *cf;
struct sk_buff *skb;
int obj;
int i;
u16 val;
skb = alloc_can_skb(dev, &cf);
if (skb == NULL)
return;
/* get id */
if (isrc & BIT(RECEIVE_EXT_CHL)) {
/* extended frame format (EFF) */
cf->can_id = ((bfin_read16(&reg->chl[RECEIVE_EXT_CHL].id1)
& 0x1FFF) << 16)
+ bfin_read16(&reg->chl[RECEIVE_EXT_CHL].id0);
cf->can_id |= CAN_EFF_FLAG;
obj = RECEIVE_EXT_CHL;
} else {
/* standard frame format (SFF) */
cf->can_id = (bfin_read16(&reg->chl[RECEIVE_STD_CHL].id1)
& 0x1ffc) >> 2;
obj = RECEIVE_STD_CHL;
}
if (bfin_read16(&reg->chl[obj].id1) & RTR)
cf->can_id |= CAN_RTR_FLAG;
/* get data length code */
cf->can_dlc = bfin_read16(&reg->chl[obj].dlc);
/* get payload */
for (i = 0; i < 8; i += 2) {
val = bfin_read16(&reg->chl[obj].data[i]);
cf->data[7 - i] = (7 - i) < cf->can_dlc ? val : 0;
cf->data[6 - i] = (6 - i) < cf->can_dlc ? (val >> 8) : 0;
}
netif_rx(skb);
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
}
static int bfin_can_err(struct net_device *dev, u16 isrc, u16 status)
{
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
struct net_device_stats *stats = &dev->stats;
struct can_frame *cf;
struct sk_buff *skb;
enum can_state state = priv->can.state;
skb = alloc_can_err_skb(dev, &cf);
if (skb == NULL)
return -ENOMEM;
if (isrc & RMLIS) {
/* data overrun interrupt */
dev_dbg(dev->dev.parent, "data overrun interrupt\n");
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
stats->rx_over_errors++;
stats->rx_errors++;
}
if (isrc & BOIS) {
dev_dbg(dev->dev.parent, "bus-off mode interrupt\n");
state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
can_bus_off(dev);
}
if (isrc & EPIS) {
/* error passive interrupt */
dev_dbg(dev->dev.parent, "error passive interrupt\n");
state = CAN_STATE_ERROR_PASSIVE;
}
if ((isrc & EWTIS) || (isrc & EWRIS)) {
dev_dbg(dev->dev.parent,
"Error Warning Transmit/Receive Interrupt\n");
state = CAN_STATE_ERROR_WARNING;
}
if (state != priv->can.state && (state == CAN_STATE_ERROR_WARNING ||
state == CAN_STATE_ERROR_PASSIVE)) {
u16 cec = bfin_read16(&reg->cec);
u8 rxerr = cec;
u8 txerr = cec >> 8;
cf->can_id |= CAN_ERR_CRTL;
if (state == CAN_STATE_ERROR_WARNING) {
priv->can.can_stats.error_warning++;
cf->data[1] = (txerr > rxerr) ?
CAN_ERR_CRTL_TX_WARNING :
CAN_ERR_CRTL_RX_WARNING;
} else {
priv->can.can_stats.error_passive++;
cf->data[1] = (txerr > rxerr) ?
CAN_ERR_CRTL_TX_PASSIVE :
CAN_ERR_CRTL_RX_PASSIVE;
}
}
if (status) {
priv->can.can_stats.bus_error++;
cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
if (status & BEF)
cf->data[2] |= CAN_ERR_PROT_BIT;
else if (status & FER)
cf->data[2] |= CAN_ERR_PROT_FORM;
else if (status & SER)
cf->data[2] |= CAN_ERR_PROT_STUFF;
else
cf->data[2] |= CAN_ERR_PROT_UNSPEC;
}
priv->can.state = state;
netif_rx(skb);
stats->rx_packets++;
stats->rx_bytes += cf->can_dlc;
return 0;
}
irqreturn_t bfin_can_interrupt(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
struct net_device_stats *stats = &dev->stats;
u16 status, isrc;
if ((irq == priv->tx_irq) && bfin_read16(&reg->mbtif2)) {
/* transmission complete interrupt */
bfin_write16(&reg->mbtif2, 0xFFFF);
stats->tx_packets++;
stats->tx_bytes += bfin_read16(&reg->chl[TRANSMIT_CHL].dlc);
can_get_echo_skb(dev, 0);
netif_wake_queue(dev);
} else if ((irq == priv->rx_irq) && bfin_read16(&reg->mbrif1)) {
/* receive interrupt */
isrc = bfin_read16(&reg->mbrif1);
bfin_write16(&reg->mbrif1, 0xFFFF);
bfin_can_rx(dev, isrc);
} else if ((irq == priv->err_irq) && bfin_read16(&reg->gis)) {
/* error interrupt */
isrc = bfin_read16(&reg->gis);
status = bfin_read16(&reg->esr);
bfin_write16(&reg->gis, 0x7FF);
bfin_can_err(dev, isrc, status);
} else {
return IRQ_NONE;
}
return IRQ_HANDLED;
}
static int bfin_can_open(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
int err;
/* set chip into reset mode */
bfin_can_set_reset_mode(dev);
/* common open */
err = open_candev(dev);
if (err)
goto exit_open;
/* register interrupt handler */
err = request_irq(priv->rx_irq, &bfin_can_interrupt, 0,
"bfin-can-rx", dev);
if (err)
goto exit_rx_irq;
err = request_irq(priv->tx_irq, &bfin_can_interrupt, 0,
"bfin-can-tx", dev);
if (err)
goto exit_tx_irq;
err = request_irq(priv->err_irq, &bfin_can_interrupt, 0,
"bfin-can-err", dev);
if (err)
goto exit_err_irq;
bfin_can_start(dev);
netif_start_queue(dev);
return 0;
exit_err_irq:
free_irq(priv->tx_irq, dev);
exit_tx_irq:
free_irq(priv->rx_irq, dev);
exit_rx_irq:
close_candev(dev);
exit_open:
return err;
}
static int bfin_can_close(struct net_device *dev)
{
struct bfin_can_priv *priv = netdev_priv(dev);
netif_stop_queue(dev);
bfin_can_set_reset_mode(dev);
close_candev(dev);
free_irq(priv->rx_irq, dev);
free_irq(priv->tx_irq, dev);
free_irq(priv->err_irq, dev);
return 0;
}
struct net_device *alloc_bfin_candev(void)
{
struct net_device *dev;
struct bfin_can_priv *priv;
dev = alloc_candev(sizeof(*priv));
if (!dev)
return NULL;
priv = netdev_priv(dev);
priv->dev = dev;
priv->can.bittiming_const = &bfin_can_bittiming_const;
priv->can.do_set_bittiming = bfin_can_set_bittiming;
priv->can.do_set_mode = bfin_can_set_mode;
return dev;
}
static const struct net_device_ops bfin_can_netdev_ops = {
.ndo_open = bfin_can_open,
.ndo_stop = bfin_can_close,
.ndo_start_xmit = bfin_can_start_xmit,
};
static int __devinit bfin_can_probe(struct platform_device *pdev)
{
int err;
struct net_device *dev;
struct bfin_can_priv *priv;
struct resource *res_mem, *rx_irq, *tx_irq, *err_irq;
unsigned short *pdata;
pdata = pdev->dev.platform_data;
if (!pdata) {
dev_err(&pdev->dev, "No platform data provided!\n");
err = -EINVAL;
goto exit;
}
res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
rx_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
tx_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 1);
err_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 2);
if (!res_mem || !rx_irq || !tx_irq || !err_irq) {
err = -EINVAL;
goto exit;
}
if (!request_mem_region(res_mem->start, resource_size(res_mem),
dev_name(&pdev->dev))) {
err = -EBUSY;
goto exit;
}
/* request peripheral pins */
err = peripheral_request_list(pdata, dev_name(&pdev->dev));
if (err)
goto exit_mem_release;
dev = alloc_bfin_candev();
if (!dev) {
err = -ENOMEM;
goto exit_peri_pin_free;
}
priv = netdev_priv(dev);
priv->membase = (void __iomem *)res_mem->start;
priv->rx_irq = rx_irq->start;
priv->tx_irq = tx_irq->start;
priv->err_irq = err_irq->start;
priv->pin_list = pdata;
priv->can.clock.freq = get_sclk();
dev_set_drvdata(&pdev->dev, dev);
SET_NETDEV_DEV(dev, &pdev->dev);
dev->flags |= IFF_ECHO; /* we support local echo */
dev->netdev_ops = &bfin_can_netdev_ops;
bfin_can_set_reset_mode(dev);
err = register_candev(dev);
if (err) {
dev_err(&pdev->dev, "registering failed (err=%d)\n", err);
goto exit_candev_free;
}
dev_info(&pdev->dev,
"%s device registered"
"(&reg_base=%p, rx_irq=%d, tx_irq=%d, err_irq=%d, sclk=%d)\n",
DRV_NAME, (void *)priv->membase, priv->rx_irq,
priv->tx_irq, priv->err_irq, priv->can.clock.freq);
return 0;
exit_candev_free:
free_candev(dev);
exit_peri_pin_free:
peripheral_free_list(pdata);
exit_mem_release:
release_mem_region(res_mem->start, resource_size(res_mem));
exit:
return err;
}
static int __devexit bfin_can_remove(struct platform_device *pdev)
{
struct net_device *dev = dev_get_drvdata(&pdev->dev);
struct bfin_can_priv *priv = netdev_priv(dev);
struct resource *res;
bfin_can_set_reset_mode(dev);
unregister_candev(dev);
dev_set_drvdata(&pdev->dev, NULL);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
release_mem_region(res->start, resource_size(res));
peripheral_free_list(priv->pin_list);
free_candev(dev);
return 0;
}
#ifdef CONFIG_PM
static int bfin_can_suspend(struct platform_device *pdev, pm_message_t mesg)
{
struct net_device *dev = dev_get_drvdata(&pdev->dev);
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
int timeout = BFIN_CAN_TIMEOUT;
if (netif_running(dev)) {
/* enter sleep mode */
bfin_write16(&reg->ctrl, bfin_read16(&reg->ctrl) | SMR);
SSYNC();
while (!(bfin_read16(&reg->intr) & SMACK)) {
udelay(10);
if (--timeout == 0) {
dev_err(dev->dev.parent,
"fail to enter sleep mode\n");
BUG();
}
}
}
return 0;
}
static int bfin_can_resume(struct platform_device *pdev)
{
struct net_device *dev = dev_get_drvdata(&pdev->dev);
struct bfin_can_priv *priv = netdev_priv(dev);
struct bfin_can_regs __iomem *reg = priv->membase;
if (netif_running(dev)) {
/* leave sleep mode */
bfin_write16(&reg->intr, 0);
SSYNC();
}
return 0;
}
#else
#define bfin_can_suspend NULL
#define bfin_can_resume NULL
#endif /* CONFIG_PM */
static struct platform_driver bfin_can_driver = {
.probe = bfin_can_probe,
.remove = __devexit_p(bfin_can_remove),
.suspend = bfin_can_suspend,
.resume = bfin_can_resume,
.driver = {
.name = DRV_NAME,
.owner = THIS_MODULE,
},
};
static int __init bfin_can_init(void)
{
return platform_driver_register(&bfin_can_driver);
}
module_init(bfin_can_init);
static void __exit bfin_can_exit(void)
{
platform_driver_unregister(&bfin_can_driver);
}
module_exit(bfin_can_exit);
MODULE_AUTHOR("Barry Song <21cnbao@gmail.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Blackfin on-chip CAN netdevice driver");

View file

@ -1104,6 +1104,8 @@ static int cnic_alloc_bnx2x_resc(struct cnic_dev *dev)
cp->bnx2x_status_blk = cp->status_blk;
cp->bnx2x_def_status_blk = cp->ethdev->irq_arr[1].status_blk;
memset(cp->bnx2x_status_blk, 0, sizeof(struct host_status_block));
cp->l2_rx_ring_size = 15;
ret = cnic_alloc_l2_rings(dev, 4);
@ -4183,6 +4185,12 @@ static void cnic_shutdown_rings(struct cnic_dev *dev)
cnic_submit_kwqe_16(dev, RAMROD_CMD_ID_ETH_HALT,
BNX2X_ISCSI_L2_CID, ETH_CONNECTION_TYPE, &l5_data);
msleep(10);
memset(&l5_data, 0, sizeof(l5_data));
cnic_submit_kwqe_16(dev, RAMROD_CMD_ID_ETH_CFC_DEL,
BNX2X_ISCSI_L2_CID, ETH_CONNECTION_TYPE |
(1 << SPE_HDR_COMMON_RAMROD_SHIFT), &l5_data);
msleep(10);
}
}
@ -4289,6 +4297,9 @@ static void cnic_stop_bnx2x_hw(struct cnic_dev *dev)
offsetof(struct cstorm_status_block_c,
index_values[HC_INDEX_C_ISCSI_EQ_CONS]),
0);
CNIC_WR(dev, BAR_CSTRORM_INTMEM +
CSTORM_ISCSI_EQ_CONS_OFFSET(cp->func, 0), 0);
CNIC_WR16(dev, cp->kcq_io_addr, 0);
cnic_free_resc(dev);
}

View file

@ -2860,6 +2860,7 @@ static int t3_reenable_adapter(struct adapter *adapter)
}
pci_set_master(adapter->pdev);
pci_restore_state(adapter->pdev);
pci_save_state(adapter->pdev);
/* Free sge resources */
t3_free_sge_resources(adapter);

View file

@ -74,7 +74,7 @@
#define E1000_WUS_BC E1000_WUFC_BC
/* Extended Device Control */
#define E1000_CTRL_EXT_SDP7_DATA 0x00000080 /* Value of SW Definable Pin 7 */
#define E1000_CTRL_EXT_SDP3_DATA 0x00000080 /* Value of SW Definable Pin 3 */
#define E1000_CTRL_EXT_EE_RST 0x00002000 /* Reinitialize from EEPROM */
#define E1000_CTRL_EXT_SPD_BYPS 0x00008000 /* Speed Select Bypass */
#define E1000_CTRL_EXT_RO_DIS 0x00020000 /* Relaxed Ordering disable */

View file

@ -46,6 +46,9 @@
#define E1000_KMRNCTRLSTA_HD_CTRL_1000_DEFAULT 0x0000
#define E1000_KMRNCTRLSTA_OPMODE_E_IDLE 0x2000
#define E1000_KMRNCTRLSTA_OPMODE_MASK 0x000C
#define E1000_KMRNCTRLSTA_OPMODE_INBAND_MDIO 0x0004
#define E1000_TCTL_EXT_GCEX_MASK 0x000FFC00 /* Gigabit Carry Extend Padding */
#define DEFAULT_TCTL_EXT_GCEX_80003ES2LAN 0x00010000
@ -462,28 +465,36 @@ static s32 e1000_read_phy_reg_gg82563_80003es2lan(struct e1000_hw *hw,
return ret_val;
}
/*
* The "ready" bit in the MDIC register may be incorrectly set
* before the device has completed the "Page Select" MDI
* transaction. So we wait 200us after each MDI command...
*/
udelay(200);
if (hw->dev_spec.e80003es2lan.mdic_wa_enable == true) {
/*
* The "ready" bit in the MDIC register may be incorrectly set
* before the device has completed the "Page Select" MDI
* transaction. So we wait 200us after each MDI command...
*/
udelay(200);
/* ...and verify the command was successful. */
ret_val = e1000e_read_phy_reg_mdic(hw, page_select, &temp);
/* ...and verify the command was successful. */
ret_val = e1000e_read_phy_reg_mdic(hw, page_select, &temp);
if (((u16)offset >> GG82563_PAGE_SHIFT) != temp) {
ret_val = -E1000_ERR_PHY;
e1000_release_phy_80003es2lan(hw);
return ret_val;
if (((u16)offset >> GG82563_PAGE_SHIFT) != temp) {
ret_val = -E1000_ERR_PHY;
e1000_release_phy_80003es2lan(hw);
return ret_val;
}
udelay(200);
ret_val = e1000e_read_phy_reg_mdic(hw,
MAX_PHY_REG_ADDRESS & offset,
data);
udelay(200);
} else {
ret_val = e1000e_read_phy_reg_mdic(hw,
MAX_PHY_REG_ADDRESS & offset,
data);
}
udelay(200);
ret_val = e1000e_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
data);
udelay(200);
e1000_release_phy_80003es2lan(hw);
return ret_val;
@ -526,28 +537,35 @@ static s32 e1000_write_phy_reg_gg82563_80003es2lan(struct e1000_hw *hw,
return ret_val;
}
if (hw->dev_spec.e80003es2lan.mdic_wa_enable == true) {
/*
* The "ready" bit in the MDIC register may be incorrectly set
* before the device has completed the "Page Select" MDI
* transaction. So we wait 200us after each MDI command...
*/
udelay(200);
/*
* The "ready" bit in the MDIC register may be incorrectly set
* before the device has completed the "Page Select" MDI
* transaction. So we wait 200us after each MDI command...
*/
udelay(200);
/* ...and verify the command was successful. */
ret_val = e1000e_read_phy_reg_mdic(hw, page_select, &temp);
/* ...and verify the command was successful. */
ret_val = e1000e_read_phy_reg_mdic(hw, page_select, &temp);
if (((u16)offset >> GG82563_PAGE_SHIFT) != temp) {
e1000_release_phy_80003es2lan(hw);
return -E1000_ERR_PHY;
}
if (((u16)offset >> GG82563_PAGE_SHIFT) != temp) {
e1000_release_phy_80003es2lan(hw);
return -E1000_ERR_PHY;
udelay(200);
ret_val = e1000e_write_phy_reg_mdic(hw,
MAX_PHY_REG_ADDRESS & offset,
data);
udelay(200);
} else {
ret_val = e1000e_write_phy_reg_mdic(hw,
MAX_PHY_REG_ADDRESS & offset,
data);
}
udelay(200);
ret_val = e1000e_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
data);
udelay(200);
e1000_release_phy_80003es2lan(hw);
return ret_val;
@ -866,6 +884,19 @@ static s32 e1000_init_hw_80003es2lan(struct e1000_hw *hw)
reg_data &= ~0x00100000;
E1000_WRITE_REG_ARRAY(hw, E1000_FFLT, 0x0001, reg_data);
/* default to true to enable the MDIC W/A */
hw->dev_spec.e80003es2lan.mdic_wa_enable = true;
ret_val = e1000_read_kmrn_reg_80003es2lan(hw,
E1000_KMRNCTRLSTA_OFFSET >>
E1000_KMRNCTRLSTA_OFFSET_SHIFT,
&i);
if (!ret_val) {
if ((i & E1000_KMRNCTRLSTA_OPMODE_MASK) ==
E1000_KMRNCTRLSTA_OPMODE_INBAND_MDIO)
hw->dev_spec.e80003es2lan.mdic_wa_enable = false;
}
/*
* Clear all of the statistics registers (clear on read). It is
* important that we do this after we have tried to establish link

View file

@ -302,6 +302,8 @@ enum e1e_registers {
#define E1000_KMRNCTRLSTA_OFFSET_SHIFT 16
#define E1000_KMRNCTRLSTA_REN 0x00200000
#define E1000_KMRNCTRLSTA_DIAG_OFFSET 0x3 /* Kumeran Diagnostic */
#define E1000_KMRNCTRLSTA_TIMEOUTS 0x4 /* Kumeran Timeouts */
#define E1000_KMRNCTRLSTA_INBAND_PARAM 0x9 /* Kumeran InBand Parameters */
#define E1000_KMRNCTRLSTA_DIAG_NELPBK 0x1000 /* Nearend Loopback mode */
#define E1000_KMRNCTRLSTA_K1_CONFIG 0x7
#define E1000_KMRNCTRLSTA_K1_ENABLE 0x140E
@ -898,6 +900,10 @@ struct e1000_dev_spec_82571 {
u32 smb_counter;
};
struct e1000_dev_spec_80003es2lan {
bool mdic_wa_enable;
};
struct e1000_shadow_ram {
u16 value;
bool modified;
@ -926,6 +932,7 @@ struct e1000_hw {
union {
struct e1000_dev_spec_82571 e82571;
struct e1000_dev_spec_80003es2lan e80003es2lan;
struct e1000_dev_spec_ich8lan ich8lan;
} dev_spec;
};

View file

@ -2755,14 +2755,16 @@ static s32 e1000_setup_copper_link_ich8lan(struct e1000_hw *hw)
* and increase the max iterations when polling the phy;
* this fixes erroneous timeouts at 10Mbps.
*/
ret_val = e1000e_write_kmrn_reg(hw, GG82563_REG(0x34, 4), 0xFFFF);
ret_val = e1000e_write_kmrn_reg(hw, E1000_KMRNCTRLSTA_TIMEOUTS, 0xFFFF);
if (ret_val)
return ret_val;
ret_val = e1000e_read_kmrn_reg(hw, GG82563_REG(0x34, 9), &reg_data);
ret_val = e1000e_read_kmrn_reg(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
&reg_data);
if (ret_val)
return ret_val;
reg_data |= 0x3F;
ret_val = e1000e_write_kmrn_reg(hw, GG82563_REG(0x34, 9), reg_data);
ret_val = e1000e_write_kmrn_reg(hw, E1000_KMRNCTRLSTA_INBAND_PARAM,
reg_data);
if (ret_val)
return ret_val;

View file

@ -4541,7 +4541,7 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
e1000_media_type_internal_serdes) {
/* keep the laser running in D3 */
ctrl_ext = er32(CTRL_EXT);
ctrl_ext |= E1000_CTRL_EXT_SDP7_DATA;
ctrl_ext |= E1000_CTRL_EXT_SDP3_DATA;
ew32(CTRL_EXT, ctrl_ext);
}

View file

@ -85,11 +85,15 @@ MODULE_PARM_DESC(debug, "debugging messages level");
static void mpc52xx_fec_tx_timeout(struct net_device *dev)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
unsigned long flags;
dev_warn(&dev->dev, "transmit timed out\n");
spin_lock_irqsave(&priv->lock, flags);
mpc52xx_fec_reset(dev);
dev->stats.tx_errors++;
spin_unlock_irqrestore(&priv->lock, flags);
netif_wake_queue(dev);
}
@ -135,28 +139,32 @@ static void mpc52xx_fec_free_rx_buffers(struct net_device *dev, struct bcom_task
}
}
static void
mpc52xx_fec_rx_submit(struct net_device *dev, struct sk_buff *rskb)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
struct bcom_fec_bd *bd;
bd = (struct bcom_fec_bd *) bcom_prepare_next_buffer(priv->rx_dmatsk);
bd->status = FEC_RX_BUFFER_SIZE;
bd->skb_pa = dma_map_single(dev->dev.parent, rskb->data,
FEC_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
bcom_submit_next_buffer(priv->rx_dmatsk, rskb);
}
static int mpc52xx_fec_alloc_rx_buffers(struct net_device *dev, struct bcom_task *rxtsk)
{
while (!bcom_queue_full(rxtsk)) {
struct sk_buff *skb;
struct bcom_fec_bd *bd;
struct sk_buff *skb;
while (!bcom_queue_full(rxtsk)) {
skb = dev_alloc_skb(FEC_RX_BUFFER_SIZE);
if (skb == NULL)
if (!skb)
return -EAGAIN;
/* zero out the initial receive buffers to aid debugging */
memset(skb->data, 0, FEC_RX_BUFFER_SIZE);
bd = (struct bcom_fec_bd *)bcom_prepare_next_buffer(rxtsk);
bd->status = FEC_RX_BUFFER_SIZE;
bd->skb_pa = dma_map_single(dev->dev.parent, skb->data,
FEC_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
bcom_submit_next_buffer(rxtsk, skb);
mpc52xx_fec_rx_submit(dev, skb);
}
return 0;
}
@ -328,13 +336,12 @@ static int mpc52xx_fec_start_xmit(struct sk_buff *skb, struct net_device *dev)
DMA_TO_DEVICE);
bcom_submit_next_buffer(priv->tx_dmatsk, skb);
spin_unlock_irqrestore(&priv->lock, flags);
if (bcom_queue_full(priv->tx_dmatsk)) {
netif_stop_queue(dev);
}
spin_unlock_irqrestore(&priv->lock, flags);
return NETDEV_TX_OK;
}
@ -359,9 +366,9 @@ static irqreturn_t mpc52xx_fec_tx_interrupt(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
unsigned long flags;
spin_lock(&priv->lock);
spin_lock_irqsave(&priv->lock, flags);
while (bcom_buffer_done(priv->tx_dmatsk)) {
struct sk_buff *skb;
struct bcom_fec_bd *bd;
@ -372,11 +379,10 @@ static irqreturn_t mpc52xx_fec_tx_interrupt(int irq, void *dev_id)
dev_kfree_skb_irq(skb);
}
spin_unlock_irqrestore(&priv->lock, flags);
netif_wake_queue(dev);
spin_unlock(&priv->lock);
return IRQ_HANDLED;
}
@ -384,67 +390,60 @@ static irqreturn_t mpc52xx_fec_rx_interrupt(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
struct sk_buff *rskb; /* received sk_buff */
struct sk_buff *skb; /* new sk_buff to enqueue in its place */
struct bcom_fec_bd *bd;
u32 status, physaddr;
int length;
unsigned long flags;
spin_lock_irqsave(&priv->lock, flags);
while (bcom_buffer_done(priv->rx_dmatsk)) {
struct sk_buff *skb;
struct sk_buff *rskb;
struct bcom_fec_bd *bd;
u32 status;
rskb = bcom_retrieve_buffer(priv->rx_dmatsk, &status,
(struct bcom_bd **)&bd);
dma_unmap_single(dev->dev.parent, bd->skb_pa, rskb->len,
DMA_FROM_DEVICE);
(struct bcom_bd **)&bd);
physaddr = bd->skb_pa;
/* Test for errors in received frame */
if (status & BCOM_FEC_RX_BD_ERRORS) {
/* Drop packet and reuse the buffer */
bd = (struct bcom_fec_bd *)
bcom_prepare_next_buffer(priv->rx_dmatsk);
bd->status = FEC_RX_BUFFER_SIZE;
bd->skb_pa = dma_map_single(dev->dev.parent,
rskb->data,
FEC_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
bcom_submit_next_buffer(priv->rx_dmatsk, rskb);
mpc52xx_fec_rx_submit(dev, rskb);
dev->stats.rx_dropped++;
continue;
}
/* skbs are allocated on open, so now we allocate a new one,
* and remove the old (with the packet) */
skb = dev_alloc_skb(FEC_RX_BUFFER_SIZE);
if (skb) {
/* Process the received skb */
int length = status & BCOM_FEC_RX_BD_LEN_MASK;
skb_put(rskb, length - 4); /* length without CRC32 */
rskb->dev = dev;
rskb->protocol = eth_type_trans(rskb, dev);
netif_rx(rskb);
} else {
if (!skb) {
/* Can't get a new one : reuse the same & drop pkt */
dev_notice(&dev->dev, "Memory squeeze, dropping packet.\n");
dev_notice(&dev->dev, "Low memory - dropped packet.\n");
mpc52xx_fec_rx_submit(dev, rskb);
dev->stats.rx_dropped++;
skb = rskb;
continue;
}
bd = (struct bcom_fec_bd *)
bcom_prepare_next_buffer(priv->rx_dmatsk);
/* Enqueue the new sk_buff back on the hardware */
mpc52xx_fec_rx_submit(dev, skb);
bd->status = FEC_RX_BUFFER_SIZE;
bd->skb_pa = dma_map_single(dev->dev.parent, skb->data,
FEC_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
/* Process the received skb - Drop the spin lock while
* calling into the network stack */
spin_unlock_irqrestore(&priv->lock, flags);
bcom_submit_next_buffer(priv->rx_dmatsk, skb);
dma_unmap_single(dev->dev.parent, physaddr, rskb->len,
DMA_FROM_DEVICE);
length = status & BCOM_FEC_RX_BD_LEN_MASK;
skb_put(rskb, length - 4); /* length without CRC32 */
rskb->dev = dev;
rskb->protocol = eth_type_trans(rskb, dev);
netif_rx(rskb);
spin_lock_irqsave(&priv->lock, flags);
}
spin_unlock_irqrestore(&priv->lock, flags);
return IRQ_HANDLED;
}
@ -454,6 +453,7 @@ static irqreturn_t mpc52xx_fec_interrupt(int irq, void *dev_id)
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
struct mpc52xx_fec __iomem *fec = priv->fec;
u32 ievent;
unsigned long flags;
ievent = in_be32(&fec->ievent);
@ -471,9 +471,10 @@ static irqreturn_t mpc52xx_fec_interrupt(int irq, void *dev_id)
if (net_ratelimit() && (ievent & FEC_IEVENT_XFIFO_ERROR))
dev_warn(&dev->dev, "FEC_IEVENT_XFIFO_ERROR\n");
spin_lock_irqsave(&priv->lock, flags);
mpc52xx_fec_reset(dev);
spin_unlock_irqrestore(&priv->lock, flags);
netif_wake_queue(dev);
return IRQ_HANDLED;
}
@ -768,6 +769,8 @@ static void mpc52xx_fec_reset(struct net_device *dev)
bcom_enable(priv->tx_dmatsk);
mpc52xx_fec_start(dev);
netif_wake_queue(dev);
}

View file

@ -2644,6 +2644,7 @@ static void gfar_netpoll(struct net_device *dev)
gfar_interrupt(priv->gfargrp[i].interruptTransmit,
&priv->gfargrp[i]);
enable_irq(priv->gfargrp[i].interruptTransmit);
}
}
}
#endif

View file

@ -342,6 +342,7 @@ static enum ixgbe_media_type ixgbe_get_media_type_82599(struct ixgbe_hw *hw)
case IXGBE_DEV_ID_82599_KX4:
case IXGBE_DEV_ID_82599_KX4_MEZZ:
case IXGBE_DEV_ID_82599_COMBO_BACKPLANE:
case IXGBE_DEV_ID_82599_KR:
case IXGBE_DEV_ID_82599_XAUI_LOM:
/* Default device ID is mezzanine card KX/KX4 */
media_type = ixgbe_media_type_backplane;

View file

@ -990,6 +990,7 @@ static void ixgbe_get_ethtool_stats(struct net_device *netdev,
char *p = NULL;
ixgbe_update_stats(adapter);
dev_get_stats(netdev);
for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++) {
switch (ixgbe_gstrings_stats[i].type) {
case NETDEV_STATS:

View file

@ -96,6 +96,8 @@ static struct pci_device_id ixgbe_pci_tbl[] = {
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_XAUI_LOM),
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_KR),
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_SFP),
board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_SFP_EM),
@ -435,8 +437,6 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
tx_ring->total_packets += total_packets;
tx_ring->stats.packets += total_packets;
tx_ring->stats.bytes += total_bytes;
netdev->stats.tx_bytes += total_bytes;
netdev->stats.tx_packets += total_packets;
return (count < tx_ring->work_limit);
}
@ -5327,6 +5327,7 @@ static netdev_tx_t ixgbe_xmit_frame(struct sk_buff *skb,
{
struct ixgbe_adapter *adapter = netdev_priv(netdev);
struct ixgbe_ring *tx_ring;
struct netdev_queue *txq;
unsigned int first;
unsigned int tx_flags = 0;
u8 hdr_len = 0;
@ -5424,6 +5425,9 @@ static netdev_tx_t ixgbe_xmit_frame(struct sk_buff *skb,
tx_ring->atr_count = 0;
}
}
txq = netdev_get_tx_queue(netdev, tx_ring->queue_index);
txq->tx_bytes += skb->len;
txq->tx_packets++;
ixgbe_tx_queue(adapter, tx_ring, tx_flags, count, skb->len,
hdr_len);
ixgbe_maybe_stop_tx(netdev, tx_ring, DESC_NEEDED);
@ -5437,19 +5441,6 @@ static netdev_tx_t ixgbe_xmit_frame(struct sk_buff *skb,
return NETDEV_TX_OK;
}
/**
* ixgbe_get_stats - Get System Network Statistics
* @netdev: network interface device structure
*
* Returns the address of the device statistics structure.
* The statistics are actually updated from the timer callback.
**/
static struct net_device_stats *ixgbe_get_stats(struct net_device *netdev)
{
/* only return the current stats */
return &netdev->stats;
}
/**
* ixgbe_set_mac - Change the Ethernet Address of the NIC
* @netdev: network interface device structure
@ -5580,7 +5571,6 @@ static const struct net_device_ops ixgbe_netdev_ops = {
.ndo_stop = ixgbe_close,
.ndo_start_xmit = ixgbe_xmit_frame,
.ndo_select_queue = ixgbe_select_queue,
.ndo_get_stats = ixgbe_get_stats,
.ndo_set_rx_mode = ixgbe_set_rx_mode,
.ndo_set_multicast_list = ixgbe_set_rx_mode,
.ndo_validate_addr = eth_validate_addr,

View file

@ -50,6 +50,7 @@
#define IXGBE_DEV_ID_82598EB_XF_LR 0x10F4
#define IXGBE_DEV_ID_82599_KX4 0x10F7
#define IXGBE_DEV_ID_82599_KX4_MEZZ 0x1514
#define IXGBE_DEV_ID_82599_KR 0x1517
#define IXGBE_DEV_ID_82599_CX4 0x10F9
#define IXGBE_DEV_ID_82599_SFP 0x10FB
#define IXGBE_DEV_ID_82599_SFP_EM 0x1507

View file

@ -1827,6 +1827,9 @@ static int mv643xx_eth_set_mac_address(struct net_device *dev, void *addr)
{
struct sockaddr *sa = addr;
if (!is_valid_ether_addr(sa->sa_data))
return -EINVAL;
memcpy(dev->dev_addr, sa->sa_data, ETH_ALEN);
netif_addr_lock_bh(dev);

View file

@ -75,7 +75,7 @@
#include "myri10ge_mcp.h"
#include "myri10ge_mcp_gen_header.h"
#define MYRI10GE_VERSION_STR "1.5.1-1.451"
#define MYRI10GE_VERSION_STR "1.5.1-1.453"
MODULE_DESCRIPTION("Myricom 10G driver (10GbE)");
MODULE_AUTHOR("Maintainer: help@myri.com");
@ -347,7 +347,7 @@ static int myri10ge_max_slices = 1;
module_param(myri10ge_max_slices, int, S_IRUGO);
MODULE_PARM_DESC(myri10ge_max_slices, "Max tx/rx queues");
static int myri10ge_rss_hash = MXGEFW_RSS_HASH_TYPE_SRC_PORT;
static int myri10ge_rss_hash = MXGEFW_RSS_HASH_TYPE_SRC_DST_PORT;
module_param(myri10ge_rss_hash, int, S_IRUGO);
MODULE_PARM_DESC(myri10ge_rss_hash, "Type of RSS hashing to do");

View file

@ -619,17 +619,20 @@ nx_set_product_offs(struct netxen_adapter *adapter)
uint32_t i;
__le32 entries;
int mn_present = (NX_IS_REVISION_P2(adapter->ahw.revision_id)) ?
1 : netxen_p3_has_mn(adapter);
ptab_descr = nx_get_table_desc(unirom, NX_UNI_DIR_SECT_PRODUCT_TBL);
if (ptab_descr == NULL)
return -1;
entries = cpu_to_le32(ptab_descr->num_entries);
nomn:
for (i = 0; i < entries; i++) {
__le32 flags, file_chiprev, offs;
u8 chiprev = adapter->ahw.revision_id;
int mn_present = netxen_p3_has_mn(adapter);
uint32_t flagbit;
offs = cpu_to_le32(ptab_descr->findex) +
@ -647,6 +650,11 @@ nx_set_product_offs(struct netxen_adapter *adapter)
}
}
if (mn_present && NX_IS_REVISION_P3(adapter->ahw.revision_id)) {
mn_present = 0;
goto nomn;
}
return -1;
}
@ -1021,6 +1029,10 @@ netxen_p3_has_mn(struct netxen_adapter *adapter)
u32 capability, flashed_ver;
capability = 0;
/* NX2031 always had MN */
if (NX_IS_REVISION_P2(adapter->ahw.revision_id))
return 1;
netxen_rom_fast_read(adapter,
NX_FW_VERSION_OFFSET, (int *)&flashed_ver);
flashed_ver = NETXEN_DECODE_VERSION(flashed_ver);

View file

@ -946,8 +946,9 @@ netxen_nic_init_coalesce_defaults(struct netxen_adapter *adapter)
NETXEN_DEFAULT_INTR_COALESCE_TX_PACKETS;
}
/* with rtnl_lock */
static int
netxen_nic_up(struct netxen_adapter *adapter, struct net_device *netdev)
__netxen_nic_up(struct netxen_adapter *adapter, struct net_device *netdev)
{
int err;
@ -988,14 +989,32 @@ netxen_nic_up(struct netxen_adapter *adapter, struct net_device *netdev)
return 0;
}
/* Usage: During resume and firmware recovery module.*/
static inline int
netxen_nic_up(struct netxen_adapter *adapter, struct net_device *netdev)
{
int err = 0;
rtnl_lock();
if (netif_running(netdev))
err = __netxen_nic_up(adapter, netdev);
rtnl_unlock();
return err;
}
/* with rtnl_lock */
static void
netxen_nic_down(struct netxen_adapter *adapter, struct net_device *netdev)
__netxen_nic_down(struct netxen_adapter *adapter, struct net_device *netdev)
{
if (adapter->is_up != NETXEN_ADAPTER_UP_MAGIC)
return;
clear_bit(__NX_DEV_UP, &adapter->state);
if (!test_and_clear_bit(__NX_DEV_UP, &adapter->state))
return;
smp_mb();
spin_lock(&adapter->tx_clean_lock);
netif_carrier_off(netdev);
netif_tx_disable(netdev);
@ -1014,6 +1033,17 @@ netxen_nic_down(struct netxen_adapter *adapter, struct net_device *netdev)
spin_unlock(&adapter->tx_clean_lock);
}
/* Usage: During suspend and firmware recovery module */
static inline void
netxen_nic_down(struct netxen_adapter *adapter, struct net_device *netdev)
{
rtnl_lock();
if (netif_running(netdev))
__netxen_nic_down(adapter, netdev);
rtnl_unlock();
}
static int
netxen_nic_attach(struct netxen_adapter *adapter)
@ -1122,14 +1152,14 @@ netxen_nic_reset_context(struct netxen_adapter *adapter)
netif_device_detach(netdev);
if (netif_running(netdev))
netxen_nic_down(adapter, netdev);
__netxen_nic_down(adapter, netdev);
netxen_nic_detach(adapter);
if (netif_running(netdev)) {
err = netxen_nic_attach(adapter);
if (!err)
err = netxen_nic_up(adapter, netdev);
err = __netxen_nic_up(adapter, netdev);
if (err)
goto done;
@ -1499,7 +1529,7 @@ static int netxen_nic_open(struct net_device *netdev)
if (err)
return err;
err = netxen_nic_up(adapter, netdev);
err = __netxen_nic_up(adapter, netdev);
if (err)
goto err_out;
@ -1519,7 +1549,7 @@ static int netxen_nic_close(struct net_device *netdev)
{
struct netxen_adapter *adapter = netdev_priv(netdev);
netxen_nic_down(adapter, netdev);
__netxen_nic_down(adapter, netdev);
return 0;
}
@ -2025,7 +2055,7 @@ static int netxen_nic_poll(struct napi_struct *napi, int budget)
if ((work_done < budget) && tx_complete) {
napi_complete(&sds_ring->napi);
if (netif_running(adapter->netdev))
if (test_bit(__NX_DEV_UP, &adapter->state))
netxen_nic_enable_int(sds_ring);
}
@ -2210,8 +2240,7 @@ netxen_detach_work(struct work_struct *work)
netif_device_detach(netdev);
if (netif_running(netdev))
netxen_nic_down(adapter, netdev);
netxen_nic_down(adapter, netdev);
netxen_nic_detach(adapter);


@ -2152,7 +2152,9 @@ static void sky2_qlink_intr(struct sky2_hw *hw)
/* reset PHY Link Detect */
phy = sky2_pci_read16(hw, PSM_CONFIG_REG4);
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
sky2_pci_write16(hw, PSM_CONFIG_REG4, phy | 1);
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
sky2_link_up(sky2);
}
@ -2968,8 +2970,13 @@ static int __devinit sky2_init(struct sky2_hw *hw)
break;
case CHIP_ID_YUKON_UL_2:
hw->flags = SKY2_HW_GIGABIT
| SKY2_HW_ADV_POWER_CTL;
break;
case CHIP_ID_YUKON_OPT:
hw->flags = SKY2_HW_GIGABIT
| SKY2_HW_NEW_LE
| SKY2_HW_ADV_POWER_CTL;
break;
@ -3077,6 +3084,7 @@ static void sky2_reset(struct sky2_hw *hw)
reg <<= PSM_CONFIG_REG4_TIMER_PHY_LINK_DETECT_BASE;
/* reset PHY Link Detect */
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
sky2_pci_write16(hw, PSM_CONFIG_REG4,
reg | PSM_CONFIG_REG4_RST_PHY_LINK_DETECT);
sky2_pci_write16(hw, PSM_CONFIG_REG4, reg);
@ -3094,6 +3102,7 @@ static void sky2_reset(struct sky2_hw *hw)
/* restore the PCIe Link Control register */
sky2_pci_write16(hw, cap + PCI_EXP_LNKCTL, reg);
}
sky2_write8(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
/* re-enable PEX PM in PEX PHY debug reg. 8 (clear bit 12) */
sky2_write32(hw, Y2_PEX_PHY_DATA, PEX_DB_ACCESS | (0x08UL << 16));


@ -534,9 +534,9 @@ static inline void smc_rcv(struct net_device *dev)
#define smc_special_lock(lock, flags) spin_lock_irqsave(lock, flags)
#define smc_special_unlock(lock, flags) spin_unlock_irqrestore(lock, flags)
#else
#define smc_special_trylock(lock, flags) (1)
#define smc_special_lock(lock, flags) do { } while (0)
#define smc_special_unlock(lock, flags) do { } while (0)
#define smc_special_trylock(lock, flags) (flags == flags)
#define smc_special_lock(lock, flags) do { flags = 0; } while (0)
#define smc_special_unlock(lock, flags) do { flags = 0; } while (0)
#endif
/*
@ -2387,7 +2387,7 @@ static int smc_drv_resume(struct device *dev)
if (ndev) {
struct smc_local *lp = netdev_priv(ndev);
smc_enable_device(dev);
smc_enable_device(pdev);
if (netif_running(ndev)) {
smc_reset(ndev);
smc_enable(ndev);


@ -97,6 +97,7 @@ ath5k_eeprom_init_header(struct ath5k_hw *ah)
struct ath5k_eeprom_info *ee = &ah->ah_capabilities.cap_eeprom;
int ret;
u16 val;
u32 cksum, offset;
/*
* Read values from EEPROM and store them in the capability structure
@ -111,7 +112,6 @@ ath5k_eeprom_init_header(struct ath5k_hw *ah)
if (ah->ah_ee_version < AR5K_EEPROM_VERSION_3_0)
return 0;
#ifdef notyet
/*
* Validate the checksum of the EEPROM data. There are some
* devices with invalid EEPROMs.
@ -124,7 +124,6 @@ ath5k_eeprom_init_header(struct ath5k_hw *ah)
ATH5K_ERR(ah->ah_sc, "Invalid EEPROM checksum 0x%04x\n", cksum);
return -EIO;
}
#endif
AR5K_EEPROM_READ_HDR(AR5K_EEPROM_ANT_GAIN(ah->ah_ee_version),
ee_ant_gain);


@ -79,6 +79,8 @@ static const struct pci_device_id ath5k_led_devices[] = {
{ ATH_SDEVICE(PCI_VENDOR_ID_HP, 0x0137b), ATH_LED(3, 1) },
/* IBM-specific AR5212 (all others) */
{ PCI_VDEVICE(ATHEROS, PCI_DEVICE_ID_ATHEROS_AR5212_IBM), ATH_LED(0, 0) },
/* Dell Vostro A860 (shahar@shahar-or.co.il) */
{ ATH_SDEVICE(PCI_VENDOR_ID_QMI, 0x0112), ATH_LED(3, 0) },
{ }
};


@ -2078,7 +2078,7 @@ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq)
&txq->axq_q, lastbf->list.prev);
txq->axq_depth--;
txok = (ds->ds_txstat.ts_status == 0);
txok = !(ds->ds_txstat.ts_status & ATH9K_TXERR_FILT);
txq->axq_tx_inprogress = false;
spin_unlock_bh(&txq->axq_lock);


@ -1784,7 +1784,10 @@ static void b43_do_interrupt_thread(struct b43_wldev *dev)
dma_reason[0], dma_reason[1],
dma_reason[2], dma_reason[3],
dma_reason[4], dma_reason[5]);
b43_controller_restart(dev, "DMA error");
b43err(dev->wl, "This device does not support DMA "
"on your system. Please use PIO instead.\n");
b43err(dev->wl, "CONFIG_B43_FORCE_PIO must be set in "
"your kernel configuration.\n");
return;
}
if (merged_dma_reason & B43_DMAIRQ_NONFATALMASK) {


@ -1353,7 +1353,7 @@ int iwl_tx_agg_stop(struct iwl_priv *priv , const u8 *ra, u16 tid)
if (priv->stations[sta_id].tid[tid].agg.state ==
IWL_EMPTYING_HW_QUEUE_ADDBA) {
IWL_DEBUG_HT(priv, "AGG stop before setup done\n");
ieee80211_stop_tx_ba_cb_irqsafe(priv->hw, ra, tid);
ieee80211_stop_tx_ba_cb_irqsafe(priv->vif, ra, tid);
priv->stations[sta_id].tid[tid].agg.state = IWL_AGG_OFF;
return 0;
}


@ -84,7 +84,8 @@ struct rxd_ops {
int rxd_size;
void (*rxd_init)(void *rxd, dma_addr_t next_dma_addr);
void (*rxd_refill)(void *rxd, dma_addr_t addr, int len);
int (*rxd_process)(void *rxd, struct ieee80211_rx_status *status);
int (*rxd_process)(void *rxd, struct ieee80211_rx_status *status,
__le16 *qos);
};
struct mwl8k_device_info {
@ -184,7 +185,7 @@ struct mwl8k_priv {
/* PHY parameters */
struct ieee80211_supported_band band;
struct ieee80211_channel channels[14];
struct ieee80211_rate rates[13];
struct ieee80211_rate rates[14];
bool radio_on;
bool radio_short_preamble;
@ -220,15 +221,6 @@ struct mwl8k_vif {
u8 bssid[ETH_ALEN];
u8 mac_addr[ETH_ALEN];
/*
* Subset of supported legacy rates.
* Intersection of AP and STA supported rates.
*/
struct ieee80211_rate legacy_rates[13];
/* number of supported legacy rates */
u8 legacy_nrates;
/* Index into station database. Returned by update_sta_db call */
u8 peer_id;
@ -266,6 +258,11 @@ static const struct ieee80211_rate mwl8k_rates[] = {
{ .bitrate = 360, .hw_value = 72, },
{ .bitrate = 480, .hw_value = 96, },
{ .bitrate = 540, .hw_value = 108, },
{ .bitrate = 720, .hw_value = 144, },
};
static const u8 mwl8k_rateids[12] = {
2, 4, 11, 22, 12, 18, 24, 36, 48, 72, 96, 108,
};
/* Set or get info from Firmware */
@ -574,7 +571,7 @@ static int mwl8k_load_firmware(struct ieee80211_hw *hw)
"helper image\n", pci_name(priv->pdev));
return rc;
}
msleep(1);
msleep(5);
rc = mwl8k_feed_fw_image(priv, fw->data, fw->size);
} else {
@ -591,9 +588,8 @@ static int mwl8k_load_firmware(struct ieee80211_hw *hw)
iowrite32(MWL8K_MODE_AP, priv->regs + MWL8K_HIU_GEN_PTR);
else
iowrite32(MWL8K_MODE_STA, priv->regs + MWL8K_HIU_GEN_PTR);
msleep(1);
loops = 200000;
loops = 500000;
do {
u32 ready_code;
@ -633,9 +629,6 @@ struct ewc_ht_info {
/* Peer Entry flags - used to define the type of the peer node */
#define MWL8K_PEER_TYPE_ACCESSPOINT 2
#define MWL8K_IEEE_LEGACY_DATA_RATES 13
#define MWL8K_MCS_BITMAP_SIZE 16
struct peer_capability_info {
/* Peer type - AP vs. STA. */
__u8 peer_type;
@ -652,10 +645,10 @@ struct peer_capability_info {
struct ewc_ht_info ewc_info;
/* Legacy rate table. Intersection of our rates and peer rates. */
__u8 legacy_rates[MWL8K_IEEE_LEGACY_DATA_RATES];
__u8 legacy_rates[12];
/* HT rate table. Intersection of our rates and peer rates. */
__u8 ht_rates[MWL8K_MCS_BITMAP_SIZE];
__u8 ht_rates[16];
__u8 pad[16];
/* If set, interoperability mode, no proprietary extensions. */
@ -706,55 +699,64 @@ static inline u16 mwl8k_qos_setbit_qlen(u16 qos, u8 len)
struct mwl8k_dma_data {
__le16 fwlen;
struct ieee80211_hdr wh;
char data[0];
} __attribute__((packed));
/* Routines to add/remove DMA header from skb. */
static inline void mwl8k_remove_dma_header(struct sk_buff *skb)
static inline void mwl8k_remove_dma_header(struct sk_buff *skb, __le16 qos)
{
struct mwl8k_dma_data *tr = (struct mwl8k_dma_data *)skb->data;
void *dst, *src = &tr->wh;
int hdrlen = ieee80211_hdrlen(tr->wh.frame_control);
u16 space = sizeof(struct mwl8k_dma_data) - hdrlen;
struct mwl8k_dma_data *tr;
int hdrlen;
dst = (void *)tr + space;
if (dst != src) {
memmove(dst, src, hdrlen);
skb_pull(skb, space);
tr = (struct mwl8k_dma_data *)skb->data;
hdrlen = ieee80211_hdrlen(tr->wh.frame_control);
if (hdrlen != sizeof(tr->wh)) {
if (ieee80211_is_data_qos(tr->wh.frame_control)) {
memmove(tr->data - hdrlen, &tr->wh, hdrlen - 2);
*((__le16 *)(tr->data - 2)) = qos;
} else {
memmove(tr->data - hdrlen, &tr->wh, hdrlen);
}
}
if (hdrlen != sizeof(*tr))
skb_pull(skb, sizeof(*tr) - hdrlen);
}
static inline void mwl8k_add_dma_header(struct sk_buff *skb)
{
struct ieee80211_hdr *wh;
u32 hdrlen, pktlen;
int hdrlen;
struct mwl8k_dma_data *tr;
wh = (struct ieee80211_hdr *)skb->data;
hdrlen = ieee80211_hdrlen(wh->frame_control);
pktlen = skb->len;
/*
* Copy up/down the 802.11 header; the firmware requires
* we present a 2-byte payload length followed by a
* 4-address header (w/o QoS), followed (optionally) by
* any WEP/ExtIV header (but only filled in for CCMP).
* Add a firmware DMA header; the firmware requires that we
* present a 2-byte payload length followed by a 4-address
* header (without QoS field), followed (optionally) by any
* WEP/ExtIV header (but only filled in for CCMP).
*/
if (hdrlen != sizeof(struct mwl8k_dma_data))
skb_push(skb, sizeof(struct mwl8k_dma_data) - hdrlen);
wh = (struct ieee80211_hdr *)skb->data;
hdrlen = ieee80211_hdrlen(wh->frame_control);
if (hdrlen != sizeof(*tr))
skb_push(skb, sizeof(*tr) - hdrlen);
if (ieee80211_is_data_qos(wh->frame_control))
hdrlen -= 2;
tr = (struct mwl8k_dma_data *)skb->data;
if (wh != &tr->wh)
memmove(&tr->wh, wh, hdrlen);
/* Clear addr4 */
memset(tr->wh.addr4, 0, ETH_ALEN);
if (hdrlen != sizeof(tr->wh))
memset(((void *)&tr->wh) + hdrlen, 0, sizeof(tr->wh) - hdrlen);
/*
* Firmware length is the length of the fully formed "802.11
* payload". That is, everything except for the 802.11 header.
* This includes all crypto material including the MIC.
*/
tr->fwlen = cpu_to_le16(pktlen - hdrlen);
tr->fwlen = cpu_to_le16(skb->len - sizeof(*tr));
}
@ -779,6 +781,10 @@ struct mwl8k_rxd_8366 {
__u8 rx_ctrl;
} __attribute__((packed));
#define MWL8K_8366_RATE_INFO_MCS_FORMAT 0x80
#define MWL8K_8366_RATE_INFO_40MHZ 0x40
#define MWL8K_8366_RATE_INFO_RATEID(x) ((x) & 0x3f)
#define MWL8K_8366_RX_CTRL_OWNED_BY_HOST 0x80
static void mwl8k_rxd_8366_init(void *_rxd, dma_addr_t next_dma_addr)
@ -800,7 +806,8 @@ static void mwl8k_rxd_8366_refill(void *_rxd, dma_addr_t addr, int len)
}
static int
mwl8k_rxd_8366_process(void *_rxd, struct ieee80211_rx_status *status)
mwl8k_rxd_8366_process(void *_rxd, struct ieee80211_rx_status *status,
__le16 *qos)
{
struct mwl8k_rxd_8366 *rxd = _rxd;
@ -813,9 +820,11 @@ mwl8k_rxd_8366_process(void *_rxd, struct ieee80211_rx_status *status)
status->signal = -rxd->rssi;
status->noise = -rxd->noise_floor;
if (rxd->rate & 0x80) {
if (rxd->rate & MWL8K_8366_RATE_INFO_MCS_FORMAT) {
status->flag |= RX_FLAG_HT;
status->rate_idx = rxd->rate & 0x7f;
if (rxd->rate & MWL8K_8366_RATE_INFO_40MHZ)
status->flag |= RX_FLAG_40MHZ;
status->rate_idx = MWL8K_8366_RATE_INFO_RATEID(rxd->rate);
} else {
int i;
@ -830,6 +839,8 @@ mwl8k_rxd_8366_process(void *_rxd, struct ieee80211_rx_status *status)
status->band = IEEE80211_BAND_2GHZ;
status->freq = ieee80211_channel_to_frequency(rxd->channel);
*qos = rxd->qos_control;
return le16_to_cpu(rxd->pkt_len);
}
@ -888,7 +899,8 @@ static void mwl8k_rxd_8687_refill(void *_rxd, dma_addr_t addr, int len)
}
static int
mwl8k_rxd_8687_process(void *_rxd, struct ieee80211_rx_status *status)
mwl8k_rxd_8687_process(void *_rxd, struct ieee80211_rx_status *status,
__le16 *qos)
{
struct mwl8k_rxd_8687 *rxd = _rxd;
u16 rate_info;
@ -903,7 +915,6 @@ mwl8k_rxd_8687_process(void *_rxd, struct ieee80211_rx_status *status)
status->signal = -rxd->rssi;
status->noise = -rxd->noise_level;
status->qual = rxd->link_quality;
status->antenna = MWL8K_8687_RATE_INFO_ANTSELECT(rate_info);
status->rate_idx = MWL8K_8687_RATE_INFO_RATEID(rate_info);
@ -919,6 +930,8 @@ mwl8k_rxd_8687_process(void *_rxd, struct ieee80211_rx_status *status)
status->band = IEEE80211_BAND_2GHZ;
status->freq = ieee80211_channel_to_frequency(rxd->channel);
*qos = rxd->qos_control;
return le16_to_cpu(rxd->pkt_len);
}
@ -1090,6 +1103,7 @@ static int rxq_process(struct ieee80211_hw *hw, int index, int limit)
void *rxd;
int pkt_len;
struct ieee80211_rx_status status;
__le16 qos;
skb = rxq->buf[rxq->head].skb;
if (skb == NULL)
@ -1097,7 +1111,7 @@ static int rxq_process(struct ieee80211_hw *hw, int index, int limit)
rxd = rxq->rxd + (rxq->head * priv->rxd_ops->rxd_size);
pkt_len = priv->rxd_ops->rxd_process(rxd, &status);
pkt_len = priv->rxd_ops->rxd_process(rxd, &status, &qos);
if (pkt_len < 0)
break;
@ -1115,7 +1129,7 @@ static int rxq_process(struct ieee80211_hw *hw, int index, int limit)
rxq->rxd_count--;
skb_put(skb, pkt_len);
mwl8k_remove_dma_header(skb);
mwl8k_remove_dma_header(skb, qos);
/*
* Check for a pending join operation. Save a
@ -1221,99 +1235,106 @@ static inline void mwl8k_tx_start(struct mwl8k_priv *priv)
ioread32(priv->regs + MWL8K_HIU_INT_CODE);
}
struct mwl8k_txq_info {
u32 fw_owned;
u32 drv_owned;
u32 unused;
u32 len;
u32 head;
u32 tail;
};
static int mwl8k_scan_tx_ring(struct mwl8k_priv *priv,
struct mwl8k_txq_info *txinfo)
static void mwl8k_dump_tx_rings(struct ieee80211_hw *hw)
{
int count, desc, status;
struct mwl8k_tx_queue *txq;
struct mwl8k_tx_desc *tx_desc;
int ndescs = 0;
struct mwl8k_priv *priv = hw->priv;
int i;
memset(txinfo, 0, MWL8K_TX_QUEUES * sizeof(struct mwl8k_txq_info));
for (i = 0; i < MWL8K_TX_QUEUES; i++) {
struct mwl8k_tx_queue *txq = priv->txq + i;
int fw_owned = 0;
int drv_owned = 0;
int unused = 0;
int desc;
for (count = 0; count < MWL8K_TX_QUEUES; count++) {
txq = priv->txq + count;
txinfo[count].len = txq->stats.len;
txinfo[count].head = txq->head;
txinfo[count].tail = txq->tail;
for (desc = 0; desc < MWL8K_TX_DESCS; desc++) {
tx_desc = txq->txd + desc;
status = le32_to_cpu(tx_desc->status);
struct mwl8k_tx_desc *tx_desc = txq->txd + desc;
u32 status;
status = le32_to_cpu(tx_desc->status);
if (status & MWL8K_TXD_STATUS_FW_OWNED)
txinfo[count].fw_owned++;
fw_owned++;
else
txinfo[count].drv_owned++;
drv_owned++;
if (tx_desc->pkt_len == 0)
txinfo[count].unused++;
unused++;
}
}
return ndescs;
printk(KERN_ERR "%s: txq[%d] len=%d head=%d tail=%d "
"fw_owned=%d drv_owned=%d unused=%d\n",
wiphy_name(hw->wiphy), i,
txq->stats.len, txq->head, txq->tail,
fw_owned, drv_owned, unused);
}
}
/*
* Must be called with priv->fw_mutex held and tx queues stopped.
*/
#define MWL8K_TX_WAIT_TIMEOUT_MS 1000
static int mwl8k_tx_wait_empty(struct ieee80211_hw *hw)
{
struct mwl8k_priv *priv = hw->priv;
DECLARE_COMPLETION_ONSTACK(tx_wait);
u32 count;
unsigned long timeout;
int retry;
int rc;
might_sleep();
/*
* The TX queues are stopped at this point, so this test
* doesn't need to take ->tx_lock.
*/
if (!priv->pending_tx_pkts)
return 0;
retry = 0;
rc = 0;
spin_lock_bh(&priv->tx_lock);
count = priv->pending_tx_pkts;
if (count)
priv->tx_wait = &tx_wait;
priv->tx_wait = &tx_wait;
while (!rc) {
int oldcount;
unsigned long timeout;
oldcount = priv->pending_tx_pkts;
spin_unlock_bh(&priv->tx_lock);
timeout = wait_for_completion_timeout(&tx_wait,
msecs_to_jiffies(MWL8K_TX_WAIT_TIMEOUT_MS));
spin_lock_bh(&priv->tx_lock);
if (timeout) {
WARN_ON(priv->pending_tx_pkts);
if (retry) {
printk(KERN_NOTICE "%s: tx rings drained\n",
wiphy_name(hw->wiphy));
}
break;
}
if (priv->pending_tx_pkts < oldcount) {
printk(KERN_NOTICE "%s: timeout waiting for tx "
"rings to drain (%d -> %d pkts), retrying\n",
wiphy_name(hw->wiphy), oldcount,
priv->pending_tx_pkts);
retry = 1;
continue;
}
priv->tx_wait = NULL;
printk(KERN_ERR "%s: tx rings stuck for %d ms\n",
wiphy_name(hw->wiphy), MWL8K_TX_WAIT_TIMEOUT_MS);
mwl8k_dump_tx_rings(hw);
rc = -ETIMEDOUT;
}
spin_unlock_bh(&priv->tx_lock);
if (count) {
struct mwl8k_txq_info txinfo[MWL8K_TX_QUEUES];
int index;
int newcount;
timeout = wait_for_completion_timeout(&tx_wait,
msecs_to_jiffies(5000));
if (timeout)
return 0;
spin_lock_bh(&priv->tx_lock);
priv->tx_wait = NULL;
newcount = priv->pending_tx_pkts;
mwl8k_scan_tx_ring(priv, txinfo);
spin_unlock_bh(&priv->tx_lock);
printk(KERN_ERR "%s(%u) TIMEDOUT:5000ms Pend:%u-->%u\n",
__func__, __LINE__, count, newcount);
for (index = 0; index < MWL8K_TX_QUEUES; index++)
printk(KERN_ERR "TXQ:%u L:%u H:%u T:%u FW:%u "
"DRV:%u U:%u\n",
index,
txinfo[index].len,
txinfo[index].head,
txinfo[index].tail,
txinfo[index].fw_owned,
txinfo[index].drv_owned,
txinfo[index].unused);
return -ETIMEDOUT;
}
return 0;
return rc;
}
#define MWL8K_TXD_SUCCESS(status) \
@ -1361,7 +1382,7 @@ static void mwl8k_txq_reclaim(struct ieee80211_hw *hw, int index, int force)
BUG_ON(skb == NULL);
pci_unmap_single(priv->pdev, addr, size, PCI_DMA_TODEVICE);
mwl8k_remove_dma_header(skb);
mwl8k_remove_dma_header(skb, tx_desc->qos_control);
/* Mark descriptor as unused */
tx_desc->pkt_phys_addr = 0;
@ -1563,8 +1584,8 @@ static void mwl8k_fw_unlock(struct ieee80211_hw *hw)
* Command processing.
*/
/* Timeout firmware commands after 2000ms */
#define MWL8K_CMD_TIMEOUT_MS 2000
/* Timeout firmware commands after 10s */
#define MWL8K_CMD_TIMEOUT_MS 10000
static int mwl8k_post_cmd(struct ieee80211_hw *hw, struct mwl8k_cmd_pkt *cmd)
{
@ -1615,12 +1636,21 @@ static int mwl8k_post_cmd(struct ieee80211_hw *hw, struct mwl8k_cmd_pkt *cmd)
MWL8K_CMD_TIMEOUT_MS);
rc = -ETIMEDOUT;
} else {
int ms;
ms = MWL8K_CMD_TIMEOUT_MS - jiffies_to_msecs(timeout);
rc = cmd->result ? -EINVAL : 0;
if (rc)
printk(KERN_ERR "%s: Command %s error 0x%x\n",
wiphy_name(hw->wiphy),
mwl8k_cmd_name(cmd->code, buf, sizeof(buf)),
le16_to_cpu(cmd->result));
else if (ms > 2000)
printk(KERN_NOTICE "%s: Command %s took %d ms\n",
wiphy_name(hw->wiphy),
mwl8k_cmd_name(cmd->code, buf, sizeof(buf)),
ms);
}
return rc;
@ -2439,8 +2469,6 @@ mwl8k_set_edca_params(struct ieee80211_hw *hw, __u8 qnum,
/*
* CMD_FINALIZE_JOIN.
*/
/* FJ beacon buffer size is compiled into the firmware. */
#define MWL8K_FJ_BEACON_MAXLEN 128
struct mwl8k_cmd_finalize_join {
@ -2450,17 +2478,13 @@ struct mwl8k_cmd_finalize_join {
} __attribute__((packed));
static int mwl8k_finalize_join(struct ieee80211_hw *hw, void *frame,
__u16 framelen, __u16 dtim)
int framelen, int dtim)
{
struct mwl8k_cmd_finalize_join *cmd;
struct ieee80211_mgmt *payload = frame;
u16 hdrlen;
u32 payload_len;
int payload_len;
int rc;
if (frame == NULL)
return -EINVAL;
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
if (cmd == NULL)
return -ENOMEM;
@ -2469,24 +2493,17 @@ static int mwl8k_finalize_join(struct ieee80211_hw *hw, void *frame,
cmd->header.length = cpu_to_le16(sizeof(*cmd));
cmd->sleep_interval = cpu_to_le32(dtim ? dtim : 1);
hdrlen = ieee80211_hdrlen(payload->frame_control);
payload_len = framelen > hdrlen ? framelen - hdrlen : 0;
/* XXX TBD Might just have to abort and return an error */
if (payload_len > MWL8K_FJ_BEACON_MAXLEN)
printk(KERN_ERR "%s(): WARNING: Incomplete beacon "
"sent to firmware. Sz=%u MAX=%u\n", __func__,
payload_len, MWL8K_FJ_BEACON_MAXLEN);
if (payload_len > MWL8K_FJ_BEACON_MAXLEN)
payload_len = framelen - ieee80211_hdrlen(payload->frame_control);
if (payload_len < 0)
payload_len = 0;
else if (payload_len > MWL8K_FJ_BEACON_MAXLEN)
payload_len = MWL8K_FJ_BEACON_MAXLEN;
if (payload && payload_len)
memcpy(cmd->beacon_data, &payload->u.beacon, payload_len);
memcpy(cmd->beacon_data, &payload->u.beacon, payload_len);
rc = mwl8k_post_cmd(hw, &cmd->header);
kfree(cmd);
return rc;
}
@ -2515,9 +2532,7 @@ static int mwl8k_cmd_update_sta_db(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *info = &mv_vif->bss_info;
struct mwl8k_cmd_update_sta_db *cmd;
struct peer_capability_info *peer_info;
struct ieee80211_rate *bitrates = mv_vif->legacy_rates;
int rc;
__u8 count, *rates;
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
if (cmd == NULL)
@ -2536,13 +2551,11 @@ static int mwl8k_cmd_update_sta_db(struct ieee80211_hw *hw,
/* Build peer_info block */
peer_info->peer_type = MWL8K_PEER_TYPE_ACCESSPOINT;
peer_info->basic_caps = cpu_to_le16(info->assoc_capability);
memcpy(peer_info->legacy_rates, mwl8k_rateids,
sizeof(mwl8k_rateids));
peer_info->interop = 1;
peer_info->amsdu_enabled = 0;
rates = peer_info->legacy_rates;
for (count = 0; count < mv_vif->legacy_nrates; count++)
rates[count] = bitrates[count].hw_value;
rc = mwl8k_post_cmd(hw, &cmd->header);
if (rc == 0)
mv_vif->peer_id = peer_info->station_id;
@ -2565,8 +2578,6 @@ static int mwl8k_cmd_update_sta_db(struct ieee80211_hw *hw,
/*
* CMD_SET_AID.
*/
#define MWL8K_RATE_INDEX_MAX_ARRAY 14
#define MWL8K_FRAME_PROT_DISABLED 0x00
#define MWL8K_FRAME_PROT_11G 0x07
#define MWL8K_FRAME_PROT_11N_HT_40MHZ_ONLY 0x02
@ -2579,7 +2590,7 @@ struct mwl8k_cmd_update_set_aid {
/* AP's MAC address (BSSID) */
__u8 bssid[ETH_ALEN];
__le16 protection_mode;
__u8 supp_rates[MWL8K_RATE_INDEX_MAX_ARRAY];
__u8 supp_rates[14];
} __attribute__((packed));
static int mwl8k_cmd_set_aid(struct ieee80211_hw *hw,
@ -2588,8 +2599,6 @@ static int mwl8k_cmd_set_aid(struct ieee80211_hw *hw,
struct mwl8k_vif *mv_vif = MWL8K_VIF(vif);
struct ieee80211_bss_conf *info = &mv_vif->bss_info;
struct mwl8k_cmd_update_set_aid *cmd;
struct ieee80211_rate *bitrates = mv_vif->legacy_rates;
int count;
u16 prot_mode;
int rc;
@ -2621,8 +2630,7 @@ static int mwl8k_cmd_set_aid(struct ieee80211_hw *hw,
}
cmd->protection_mode = cpu_to_le16(prot_mode);
for (count = 0; count < mv_vif->legacy_nrates; count++)
cmd->supp_rates[count] = bitrates[count].hw_value;
memcpy(cmd->supp_rates, mwl8k_rateids, sizeof(mwl8k_rateids));
rc = mwl8k_post_cmd(hw, &cmd->header);
kfree(cmd);
@ -2635,20 +2643,17 @@ static int mwl8k_cmd_set_aid(struct ieee80211_hw *hw,
*/
struct mwl8k_cmd_update_rateset {
struct mwl8k_cmd_pkt header;
__u8 legacy_rates[MWL8K_RATE_INDEX_MAX_ARRAY];
__u8 legacy_rates[14];
/* Bitmap for supported MCS codes. */
__u8 mcs_set[MWL8K_IEEE_LEGACY_DATA_RATES];
__u8 reserved[MWL8K_IEEE_LEGACY_DATA_RATES];
__u8 mcs_set[16];
__u8 reserved[16];
} __attribute__((packed));
static int mwl8k_update_rateset(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
struct mwl8k_vif *mv_vif = MWL8K_VIF(vif);
struct mwl8k_cmd_update_rateset *cmd;
struct ieee80211_rate *bitrates = mv_vif->legacy_rates;
int count;
int rc;
cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
@ -2657,9 +2662,7 @@ static int mwl8k_update_rateset(struct ieee80211_hw *hw,
cmd->header.code = cpu_to_le16(MWL8K_CMD_SET_RATE);
cmd->header.length = cpu_to_le16(sizeof(*cmd));
for (count = 0; count < mv_vif->legacy_nrates; count++)
cmd->legacy_rates[count] = bitrates[count].hw_value;
memcpy(cmd->legacy_rates, mwl8k_rateids, sizeof(mwl8k_rateids));
rc = mwl8k_post_cmd(hw, &cmd->header);
kfree(cmd);
@ -2932,11 +2935,6 @@ static int mwl8k_add_interface(struct ieee80211_hw *hw,
/* Back pointer to parent config block */
mwl8k_vif->priv = priv;
/* Setup initial PHY parameters */
memcpy(mwl8k_vif->legacy_rates,
priv->rates, sizeof(mwl8k_vif->legacy_rates));
mwl8k_vif->legacy_nrates = ARRAY_SIZE(priv->rates);
/* Set Initial sequence number to zero */
mwl8k_vif->seqno = 0;
@ -3014,9 +3012,6 @@ static void mwl8k_bss_info_changed(struct ieee80211_hw *hw,
struct mwl8k_vif *mwl8k_vif = MWL8K_VIF(vif);
int rc;
if (changed & BSS_CHANGED_BSSID)
memcpy(mwl8k_vif->bssid, info->bssid, ETH_ALEN);
if ((changed & BSS_CHANGED_ASSOC) == 0)
return;
@ -3030,6 +3025,8 @@ static void mwl8k_bss_info_changed(struct ieee80211_hw *hw,
memcpy(&mwl8k_vif->bss_info, info,
sizeof(struct ieee80211_bss_conf));
memcpy(mwl8k_vif->bssid, info->bssid, ETH_ALEN);
/* Install rates */
rc = mwl8k_update_rateset(hw, vif);
if (rc)
@ -3366,7 +3363,7 @@ static int __devinit mwl8k_probe(struct pci_dev *pdev,
if (rc) {
printk(KERN_ERR "%s: Cannot obtain PCI resources\n",
MWL8K_NAME);
return rc;
goto err_disable_device;
}
pci_set_master(pdev);
@ -3597,6 +3594,8 @@ static int __devinit mwl8k_probe(struct pci_dev *pdev,
err_free_reg:
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
return rc;


@ -427,7 +427,7 @@ int hermesi_program_init(hermes_t *hw, u32 offset)
if (err)
return err;
pr_debug(KERN_DEBUG PFX "Enabling volatile, EP 0x%08x\n", offset);
pr_debug(PFX "Enabling volatile, EP 0x%08x\n", offset);
err = hermes_doicmd_wait(hw,
HERMES_PROGRAM_ENABLE_VOLATILE,
offset & 0xFFFFu,


@ -23,6 +23,7 @@
#define RTL8187_EEPROM_TXPWR_CHAN_1 0x16 /* 3 channels */
#define RTL8187_EEPROM_TXPWR_CHAN_6 0x1B /* 2 channels */
#define RTL8187_EEPROM_TXPWR_CHAN_4 0x3D /* 2 channels */
#define RTL8187_EEPROM_SELECT_GPIO 0x3B
#define RTL8187_REQT_READ 0xC0
#define RTL8187_REQT_WRITE 0x40
@ -31,6 +32,9 @@
#define RTL8187_MAX_RX 0x9C4
#define RFKILL_MASK_8187_89_97 0x2
#define RFKILL_MASK_8198 0x4
struct rtl8187_rx_info {
struct urb *urb;
struct ieee80211_hw *dev;
@ -104,6 +108,7 @@ struct rtl8187_priv {
struct delayed_work work;
struct ieee80211_hw *dev;
#ifdef CONFIG_RTL8187_LEDS
struct rtl8187_led led_radio;
struct rtl8187_led led_tx;
struct rtl8187_led led_rx;
struct delayed_work led_on;
@ -122,6 +127,7 @@ struct rtl8187_priv {
u8 noise;
u8 slot_time;
u8 aifsn[4];
u8 rfkill_mask;
struct {
__le64 buf;
struct sk_buff_head queue;


@ -1322,6 +1322,7 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
struct ieee80211_channel *channel;
const char *chip_name;
u16 txpwr, reg;
u16 product_id = le16_to_cpu(udev->descriptor.idProduct);
int err, i;
dev = ieee80211_alloc_hw(sizeof(*priv), &rtl8187_ops);
@ -1481,6 +1482,13 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
(*channel++).hw_value = txpwr & 0xFF;
(*channel++).hw_value = txpwr >> 8;
}
/* Handle the differing rfkill GPIO bit in different models */
priv->rfkill_mask = RFKILL_MASK_8187_89_97;
if (product_id == 0x8197 || product_id == 0x8198) {
eeprom_93cx6_read(&eeprom, RTL8187_EEPROM_SELECT_GPIO, &reg);
if (reg & 0xFF00)
priv->rfkill_mask = RFKILL_MASK_8198;
}
/*
* XXX: Once this driver supports anything that requires
@ -1509,9 +1517,9 @@ static int __devinit rtl8187_probe(struct usb_interface *intf,
mutex_init(&priv->conf_mutex);
skb_queue_head_init(&priv->b_tx_status.queue);
printk(KERN_INFO "%s: hwaddr %pM, %s V%d + %s\n",
printk(KERN_INFO "%s: hwaddr %pM, %s V%d + %s, rfkill mask %d\n",
wiphy_name(dev->wiphy), dev->wiphy->perm_addr,
chip_name, priv->asic_rev, priv->rf->name);
chip_name, priv->asic_rev, priv->rf->name, priv->rfkill_mask);
#ifdef CONFIG_RTL8187_LEDS
eeprom_93cx6_read(&eeprom, 0x3F, &reg);


@ -105,19 +105,36 @@ static void rtl8187_led_brightness_set(struct led_classdev *led_dev,
struct rtl8187_led *led = container_of(led_dev, struct rtl8187_led,
led_dev);
struct ieee80211_hw *hw = led->dev;
struct rtl8187_priv *priv = hw->priv;
struct rtl8187_priv *priv;
static bool radio_on;
if (brightness == LED_OFF) {
ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
/* The LED is off for 1/20 sec so that it just blinks. */
ieee80211_queue_delayed_work(hw, &priv->led_on, HZ / 20);
} else
ieee80211_queue_delayed_work(hw, &priv->led_on, 0);
if (!hw)
return;
priv = hw->priv;
if (led->is_radio) {
if (brightness == LED_FULL) {
ieee80211_queue_delayed_work(hw, &priv->led_on, 0);
radio_on = true;
} else if (radio_on) {
radio_on = false;
cancel_delayed_work_sync(&priv->led_on);
ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
}
} else if (radio_on) {
if (brightness == LED_OFF) {
ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
/* The LED is off for 1/20 sec - it just blinks. */
ieee80211_queue_delayed_work(hw, &priv->led_on,
HZ / 20);
} else
ieee80211_queue_delayed_work(hw, &priv->led_on, 0);
}
}
static int rtl8187_register_led(struct ieee80211_hw *dev,
struct rtl8187_led *led, const char *name,
const char *default_trigger, u8 ledpin)
const char *default_trigger, u8 ledpin,
bool is_radio)
{
int err;
struct rtl8187_priv *priv = dev->priv;
@ -128,6 +145,7 @@ static int rtl8187_register_led(struct ieee80211_hw *dev,
return -EINVAL;
led->dev = dev;
led->ledpin = ledpin;
led->is_radio = is_radio;
strncpy(led->name, name, sizeof(led->name));
led->led_dev.name = led->name;
@ -145,7 +163,11 @@ static int rtl8187_register_led(struct ieee80211_hw *dev,
static void rtl8187_unregister_led(struct rtl8187_led *led)
{
struct ieee80211_hw *hw = led->dev;
struct rtl8187_priv *priv = hw->priv;
led_classdev_unregister(&led->led_dev);
flush_delayed_work(&priv->led_off);
led->dev = NULL;
}
@ -182,34 +204,38 @@ void rtl8187_leds_init(struct ieee80211_hw *dev, u16 custid)
INIT_DELAYED_WORK(&priv->led_on, led_turn_on);
INIT_DELAYED_WORK(&priv->led_off, led_turn_off);
snprintf(name, sizeof(name),
"rtl8187-%s::radio", wiphy_name(dev->wiphy));
err = rtl8187_register_led(dev, &priv->led_radio, name,
ieee80211_get_radio_led_name(dev), ledpin, true);
if (err)
return;
snprintf(name, sizeof(name),
"rtl8187-%s::tx", wiphy_name(dev->wiphy));
err = rtl8187_register_led(dev, &priv->led_tx, name,
ieee80211_get_tx_led_name(dev), ledpin);
ieee80211_get_tx_led_name(dev), ledpin, false);
if (err)
goto error;
goto err_tx;
snprintf(name, sizeof(name),
"rtl8187-%s::rx", wiphy_name(dev->wiphy));
err = rtl8187_register_led(dev, &priv->led_rx, name,
ieee80211_get_rx_led_name(dev), ledpin);
if (!err) {
ieee80211_queue_delayed_work(dev, &priv->led_on, 0);
ieee80211_get_rx_led_name(dev), ledpin, false);
if (!err)
return;
}
/* registration of RX LED failed - unregister TX */
/* registration of RX LED failed - unregister */
rtl8187_unregister_led(&priv->led_tx);
error:
/* If registration of either failed, cancel delayed work */
cancel_delayed_work_sync(&priv->led_off);
cancel_delayed_work_sync(&priv->led_on);
err_tx:
rtl8187_unregister_led(&priv->led_radio);
}
void rtl8187_leds_exit(struct ieee80211_hw *dev)
{
struct rtl8187_priv *priv = dev->priv;
/* turn the LED off before exiting */
ieee80211_queue_delayed_work(dev, &priv->led_off, 0);
rtl8187_unregister_led(&priv->led_radio);
rtl8187_unregister_led(&priv->led_rx);
rtl8187_unregister_led(&priv->led_tx);
cancel_delayed_work_sync(&priv->led_off);


@ -47,6 +47,8 @@ struct rtl8187_led {
u8 ledpin;
/* The unique name string for this LED device. */
char name[RTL8187_LED_MAX_NAME_LEN + 1];
/* If the LED is radio or tx/rx */
bool is_radio;
};
void rtl8187_leds_init(struct ieee80211_hw *dev, u16 code);


@ -25,10 +25,10 @@ static bool rtl8187_is_radio_enabled(struct rtl8187_priv *priv)
u8 gpio;
gpio = rtl818x_ioread8(priv, &priv->map->GPIO0);
rtl818x_iowrite8(priv, &priv->map->GPIO0, gpio & ~0x02);
rtl818x_iowrite8(priv, &priv->map->GPIO0, gpio & ~priv->rfkill_mask);
gpio = rtl818x_ioread8(priv, &priv->map->GPIO1);
return gpio & 0x02;
return gpio & priv->rfkill_mask;
}
void rtl8187_rfkill_init(struct ieee80211_hw *hw)


@ -629,10 +629,6 @@ static int wl1251_op_config(struct ieee80211_hw *hw, u32 changed)
goto out_sleep;
}
ret = wl1251_build_null_data(wl);
if (ret < 0)
goto out_sleep;
if (conf->flags & IEEE80211_CONF_PS && !wl->psm_requested) {
wl1251_debug(DEBUG_PSM, "psm enabled");
@ -1110,6 +1106,21 @@ static void wl1251_op_bss_info_changed(struct ieee80211_hw *hw,
if (ret < 0)
goto out;
if (changed & BSS_CHANGED_BSSID) {
memcpy(wl->bssid, bss_conf->bssid, ETH_ALEN);
ret = wl1251_build_null_data(wl);
if (ret < 0)
goto out;
if (wl->bss_type != BSS_TYPE_IBSS) {
ret = wl1251_join(wl, wl->bss_type, wl->channel,
wl->beacon_int, wl->dtim_period);
if (ret < 0)
goto out_sleep;
}
}
if (changed & BSS_CHANGED_ASSOC) {
if (bss_conf->assoc) {
wl->beacon_int = bss_conf->beacon_int;
@ -1169,23 +1180,6 @@ static void wl1251_op_bss_info_changed(struct ieee80211_hw *hw,
}
}
if (changed & BSS_CHANGED_BSSID) {
memcpy(wl->bssid, bss_conf->bssid, ETH_ALEN);
ret = wl1251_build_null_data(wl);
if (ret < 0)
goto out;
if (wl->bss_type != BSS_TYPE_IBSS) {
ret = wl1251_join(wl, wl->bss_type, wl->channel,
wl->beacon_int, wl->dtim_period);
if (ret < 0)
goto out_sleep;
wl1251_warning("Set ctsprotect failed %d", ret);
goto out_sleep;
}
}
if (changed & BSS_CHANGED_BEACON) {
beacon = ieee80211_beacon_get(hw, vif);
ret = wl1251_cmd_template_set(wl, CMD_BEACON, beacon->data,


@ -16,15 +16,23 @@
#include <linux/ioctl.h>
/* The magic IOCTL value for this interface. */
#define GIGASET_IOCTL 0x47
#define GIGVER_DRIVER 0
#define GIGVER_COMPAT 1
#define GIGVER_FWBASE 2
/* enable/disable device control via character device (lock out ISDN subsys) */
#define GIGASET_REDIR _IOWR(GIGASET_IOCTL, 0, int)
#define GIGASET_REDIR _IOWR (GIGASET_IOCTL, 0, int)
#define GIGASET_CONFIG _IOWR (GIGASET_IOCTL, 1, int)
#define GIGASET_BRKCHARS _IOW (GIGASET_IOCTL, 2, unsigned char[6]) //FIXME [6] okay?
#define GIGASET_VERSION _IOWR (GIGASET_IOCTL, 3, unsigned[4])
/* enable adapter configuration mode (M10x only) */
#define GIGASET_CONFIG _IOWR(GIGASET_IOCTL, 1, int)
/* set break characters (M105 only) */
#define GIGASET_BRKCHARS _IOW(GIGASET_IOCTL, 2, unsigned char[6])
/* get version information selected by arg[0] */
#define GIGASET_VERSION _IOWR(GIGASET_IOCTL, 3, unsigned[4])
/* values for GIGASET_VERSION arg[0] */
#define GIGVER_DRIVER 0 /* get driver version */
#define GIGVER_COMPAT 1 /* get interface compatibility version */
#define GIGVER_FWBASE 2 /* get base station firmware version */
#endif


@ -137,8 +137,6 @@ extern struct ctl_table ether_table[];
extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len);
#define MAC_FMT "%02x:%02x:%02x:%02x:%02x:%02x"
#define MAC_BUF_SIZE 18
#define DECLARE_MAC_BUF(var) char var[MAC_BUF_SIZE]
#endif


@ -46,7 +46,7 @@ extern asmlinkage long compat_sys_sendmsg(int,struct compat_msghdr __user *,unsi
extern asmlinkage long compat_sys_recvmsg(int,struct compat_msghdr __user *,unsigned);
extern asmlinkage long compat_sys_recvmmsg(int, struct compat_mmsghdr __user *,
unsigned, unsigned,
struct timespec __user *);
struct compat_timespec __user *);
extern asmlinkage long compat_sys_getsockopt(int, int, int, char __user *, int __user *);
extern int put_cmsg_compat(struct msghdr*, int, int, int, void *);


@ -53,7 +53,7 @@ static inline int inet6_sk_ehashfn(const struct sock *sk)
return inet6_ehashfn(net, laddr, lport, faddr, fport);
}
extern void __inet6_hash(struct sock *sk);
extern int __inet6_hash(struct sock *sk, struct inet_timewait_sock *twp);
/*
* Sockets in TCP_CLOSE state are _always_ taken out of the hash, so


@ -251,7 +251,7 @@ extern void inet_put_port(struct sock *sk);
void inet_hashinfo_init(struct inet_hashinfo *h);
extern void __inet_hash_nolisten(struct sock *sk);
extern int __inet_hash_nolisten(struct sock *sk, struct inet_timewait_sock *tw);
extern void inet_hash(struct sock *sk);
extern void inet_unhash(struct sock *sk);
@ -391,10 +391,12 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo,
}
extern int __inet_hash_connect(struct inet_timewait_death_row *death_row,
struct sock *sk, u32 port_offset,
struct sock *sk,
u32 port_offset,
int (*check_established)(struct inet_timewait_death_row *,
struct sock *, __u16, struct inet_timewait_sock **),
void (*hash)(struct sock *sk));
int (*hash)(struct sock *sk, struct inet_timewait_sock *twp));
extern int inet_hash_connect(struct inet_timewait_death_row *death_row,
struct sock *sk);
#endif /* _INET_HASHTABLES_H */


@ -201,6 +201,9 @@ extern void inet_twsk_put(struct inet_timewait_sock *tw);
extern int inet_twsk_unhash(struct inet_timewait_sock *tw);
extern int inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
struct inet_hashinfo *hashinfo);
extern struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk,
const int state);


@ -1261,29 +1261,6 @@ static inline struct sk_buff *tcp_write_queue_prev(struct sock *sk, struct sk_bu
#define tcp_for_write_queue_from_safe(skb, tmp, sk) \
skb_queue_walk_from_safe(&(sk)->sk_write_queue, skb, tmp)
/* This function calculates a "timeout" which is equivalent to the timeout of a
* TCP connection after "boundary" unsuccessful, exponentially backed-off
* retransmissions with an initial RTO of TCP_RTO_MIN.
*/
static inline bool retransmits_timed_out(const struct sock *sk,
unsigned int boundary)
{
unsigned int timeout, linear_backoff_thresh;
if (!inet_csk(sk)->icsk_retransmits)
return false;
linear_backoff_thresh = ilog2(TCP_RTO_MAX/TCP_RTO_MIN);
if (boundary <= linear_backoff_thresh)
timeout = ((2 << boundary) - 1) * TCP_RTO_MIN;
else
timeout = ((2 << linear_backoff_thresh) - 1) * TCP_RTO_MIN +
(boundary - linear_backoff_thresh) * TCP_RTO_MAX;
return (tcp_time_stamp - tcp_sk(sk)->retrans_stamp) >= timeout;
}
static inline struct sk_buff *tcp_send_head(struct sock *sk)
{
return sk->sk_send_head;


@ -554,6 +554,12 @@ static const struct net_device_ops br2684_netdev_ops = {
.ndo_validate_addr = eth_validate_addr,
};
static const struct net_device_ops br2684_netdev_ops_routed = {
.ndo_start_xmit = br2684_start_xmit,
.ndo_set_mac_address = br2684_mac_addr,
.ndo_change_mtu = eth_change_mtu
};
static void br2684_setup(struct net_device *netdev)
{
struct br2684_dev *brdev = BRPRIV(netdev);
@ -569,11 +575,10 @@ static void br2684_setup(struct net_device *netdev)
static void br2684_setup_routed(struct net_device *netdev)
{
struct br2684_dev *brdev = BRPRIV(netdev);
brdev->net_dev = netdev;
netdev->hard_header_len = 0;
netdev->netdev_ops = &br2684_netdev_ops;
netdev->netdev_ops = &br2684_netdev_ops_routed;
netdev->addr_len = 0;
netdev->mtu = 1500;
netdev->type = ARPHRD_PPP;


@ -62,7 +62,6 @@ static int lec_open(struct net_device *dev);
static netdev_tx_t lec_start_xmit(struct sk_buff *skb,
struct net_device *dev);
static int lec_close(struct net_device *dev);
static void lec_init(struct net_device *dev);
static struct lec_arp_table *lec_arp_find(struct lec_priv *priv,
const unsigned char *mac_addr);
static int lec_arp_remove(struct lec_priv *priv,
@ -670,13 +669,6 @@ static const struct net_device_ops lec_netdev_ops = {
.ndo_set_multicast_list = lec_set_multicast_list,
};
static void lec_init(struct net_device *dev)
{
dev->netdev_ops = &lec_netdev_ops;
printk("%s: Initialized!\n", dev->name);
}
static const unsigned char lec_ctrl_magic[] = {
0xff,
0x00,
@ -893,6 +885,7 @@ static int lecd_attach(struct atm_vcc *vcc, int arg)
dev_lec[i] = alloc_etherdev(size);
if (!dev_lec[i])
return -ENOMEM;
dev_lec[i]->netdev_ops = &lec_netdev_ops;
snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i);
if (register_netdev(dev_lec[i])) {
free_netdev(dev_lec[i]);
@ -901,7 +894,6 @@ static int lecd_attach(struct atm_vcc *vcc, int arg)
priv = netdev_priv(dev_lec[i]);
priv->is_trdev = is_trdev;
lec_init(dev_lec[i]);
} else {
priv = netdev_priv(dev_lec[i]);
if (priv->lecd)


@ -754,26 +754,21 @@ asmlinkage long compat_sys_recvfrom(int fd, void __user *buf, size_t len,
asmlinkage long compat_sys_recvmmsg(int fd, struct compat_mmsghdr __user *mmsg,
unsigned vlen, unsigned int flags,
struct timespec __user *timeout)
struct compat_timespec __user *timeout)
{
int datagrams;
struct timespec ktspec;
struct compat_timespec __user *utspec;
if (timeout == NULL)
return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT, NULL);
utspec = (struct compat_timespec __user *)timeout;
if (get_user(ktspec.tv_sec, &utspec->tv_sec) ||
get_user(ktspec.tv_nsec, &utspec->tv_nsec))
if (get_compat_timespec(&ktspec, timeout))
return -EFAULT;
datagrams = __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT, &ktspec);
if (datagrams > 0 &&
(put_user(ktspec.tv_sec, &utspec->tv_sec) ||
put_user(ktspec.tv_nsec, &utspec->tv_nsec)))
if (datagrams > 0 && put_compat_timespec(&ktspec, timeout))
datagrams = -EFAULT;
return datagrams;


@ -4771,21 +4771,23 @@ static void net_set_todo(struct net_device *dev)
static void rollback_registered_many(struct list_head *head)
{
struct net_device *dev;
struct net_device *dev, *tmp;
BUG_ON(dev_boot_phase);
ASSERT_RTNL();
list_for_each_entry(dev, head, unreg_list) {
list_for_each_entry_safe(dev, tmp, head, unreg_list) {
/* Some devices call without registering
* for initialization unwind.
* for initialization unwind. Remove those
* devices and proceed with the remaining.
*/
if (dev->reg_state == NETREG_UNINITIALIZED) {
pr_debug("unregister_netdevice: device %s/%p never "
"was registered\n", dev->name, dev);
WARN_ON(1);
return;
list_del(&dev->unreg_list);
continue;
}
BUG_ON(dev->reg_state != NETREG_REGISTERED);


@ -408,7 +408,7 @@ struct sock *dccp_v4_request_recv_sock(struct sock *sk, struct sk_buff *skb,
dccp_sync_mss(newsk, dst_mtu(dst));
__inet_hash_nolisten(newsk);
__inet_hash_nolisten(newsk, NULL);
__inet_inherit_port(sk, newsk);
return newsk;


@ -46,7 +46,7 @@ static void dccp_v6_hash(struct sock *sk)
return;
}
local_bh_disable();
__inet6_hash(sk);
__inet6_hash(sk, NULL);
local_bh_enable();
}
}
@ -644,7 +644,7 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk,
newinet->inet_daddr = newinet->inet_saddr = LOOPBACK4_IPV6;
newinet->inet_rcv_saddr = LOOPBACK4_IPV6;
__inet6_hash(newsk);
__inet6_hash(newsk, NULL);
__inet_inherit_port(sk, newsk);
return newsk;


@ -351,12 +351,13 @@ static inline u32 inet_sk_port_offset(const struct sock *sk)
inet->inet_dport);
}
void __inet_hash_nolisten(struct sock *sk)
int __inet_hash_nolisten(struct sock *sk, struct inet_timewait_sock *tw)
{
struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
struct hlist_nulls_head *list;
spinlock_t *lock;
struct inet_ehash_bucket *head;
int twrefcnt = 0;
WARN_ON(!sk_unhashed(sk));
@ -367,8 +368,13 @@ void __inet_hash_nolisten(struct sock *sk)
spin_lock(lock);
__sk_nulls_add_node_rcu(sk, list);
if (tw) {
WARN_ON(sk->sk_hash != tw->tw_hash);
twrefcnt = inet_twsk_unhash(tw);
}
spin_unlock(lock);
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
return twrefcnt;
}
EXPORT_SYMBOL_GPL(__inet_hash_nolisten);
@ -378,7 +384,7 @@ static void __inet_hash(struct sock *sk)
struct inet_listen_hashbucket *ilb;
if (sk->sk_state != TCP_LISTEN) {
__inet_hash_nolisten(sk);
__inet_hash_nolisten(sk, NULL);
return;
}
@ -427,7 +433,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
struct sock *sk, u32 port_offset,
int (*check_established)(struct inet_timewait_death_row *,
struct sock *, __u16, struct inet_timewait_sock **),
void (*hash)(struct sock *sk))
int (*hash)(struct sock *sk, struct inet_timewait_sock *twp))
{
struct inet_hashinfo *hinfo = death_row->hashinfo;
const unsigned short snum = inet_sk(sk)->inet_num;
@ -435,6 +441,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
struct inet_bind_bucket *tb;
int ret;
struct net *net = sock_net(sk);
int twrefcnt = 1;
if (!snum) {
int i, remaining, low, high, port;
@ -493,13 +500,18 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
inet_bind_hash(sk, tb, port);
if (sk_unhashed(sk)) {
inet_sk(sk)->inet_sport = htons(port);
hash(sk);
twrefcnt += hash(sk, tw);
}
if (tw)
twrefcnt += inet_twsk_bind_unhash(tw, hinfo);
spin_unlock(&head->lock);
if (tw) {
inet_twsk_deschedule(tw, death_row);
inet_twsk_put(tw);
while (twrefcnt) {
twrefcnt--;
inet_twsk_put(tw);
}
}
ret = 0;
@ -510,7 +522,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
tb = inet_csk(sk)->icsk_bind_hash;
spin_lock_bh(&head->lock);
if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {
hash(sk);
hash(sk, NULL);
spin_unlock_bh(&head->lock);
return 0;
} else {


@ -15,9 +15,13 @@
#include <net/ip.h>
/*
* unhash a timewait socket from established hash
* lock must be hold by caller
/**
* inet_twsk_unhash - unhash a timewait socket from established hash
* @tw: timewait socket
*
* unhash a timewait socket from established hash, if hashed.
* ehash lock must be held by caller.
* Returns 1 if caller should call inet_twsk_put() after lock release.
*/
int inet_twsk_unhash(struct inet_timewait_sock *tw)
{
@ -26,6 +30,37 @@ int inet_twsk_unhash(struct inet_timewait_sock *tw)
hlist_nulls_del_rcu(&tw->tw_node);
sk_nulls_node_init(&tw->tw_node);
/*
* We cannot call inet_twsk_put() ourselves under the lock;
* the caller must call it for us.
*/
return 1;
}
/**
* inet_twsk_bind_unhash - unhash a timewait socket from bind hash
* @tw: timewait socket
* @hashinfo: hashinfo pointer
*
* unhash a timewait socket from bind hash, if hashed.
* bind hash lock must be held by caller.
* Returns 1 if caller should call inet_twsk_put() after lock release.
*/
int inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
struct inet_hashinfo *hashinfo)
{
struct inet_bind_bucket *tb = tw->tw_tb;
if (!tb)
return 0;
__hlist_del(&tw->tw_bind_node);
tw->tw_tb = NULL;
inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb);
/*
* We cannot call inet_twsk_put() ourselves under the lock;
* the caller must call it for us.
*/
return 1;
}
@ -34,7 +69,6 @@ static void __inet_twsk_kill(struct inet_timewait_sock *tw,
struct inet_hashinfo *hashinfo)
{
struct inet_bind_hashbucket *bhead;
struct inet_bind_bucket *tb;
int refcnt;
/* Unlink from established hashes. */
spinlock_t *lock = inet_ehash_lockp(hashinfo, tw->tw_hash);
@ -46,15 +80,11 @@ static void __inet_twsk_kill(struct inet_timewait_sock *tw,
/* Disassociate with bind bucket. */
bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), tw->tw_num,
hashinfo->bhash_size)];
spin_lock(&bhead->lock);
tb = tw->tw_tb;
if (tb) {
__hlist_del(&tw->tw_bind_node);
tw->tw_tb = NULL;
inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb);
refcnt++;
}
refcnt += inet_twsk_bind_unhash(tw, hashinfo);
spin_unlock(&bhead->lock);
#ifdef SOCK_REFCNT_DEBUG
if (atomic_read(&tw->tw_refcnt) != 1) {
printk(KERN_DEBUG "%s timewait_sock %p refcnt=%d\n",
@ -126,7 +156,7 @@ void __inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
/*
* Notes :
* - We initially set tw_refcnt to 0 in inet_twsk_alloc()
* - We initially set tw_refcnt to 0 in inet_twsk_alloc()
* - We add one reference for the bhash link
* - We add one reference for the ehash link
* - We want this refcnt update done before allowing other
@ -136,7 +166,6 @@ void __inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
spin_unlock(lock);
}
EXPORT_SYMBOL_GPL(__inet_twsk_hashdance);
struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk, const int state)
@ -177,7 +206,6 @@ struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk, const int stat
return tw;
}
EXPORT_SYMBOL_GPL(inet_twsk_alloc);
/* Returns non-zero if quota exceeded. */
@ -256,7 +284,6 @@ void inet_twdr_hangman(unsigned long data)
out:
spin_unlock(&twdr->death_lock);
}
EXPORT_SYMBOL_GPL(inet_twdr_hangman);
void inet_twdr_twkill_work(struct work_struct *work)
@ -287,7 +314,6 @@ void inet_twdr_twkill_work(struct work_struct *work)
spin_unlock_bh(&twdr->death_lock);
}
}
EXPORT_SYMBOL_GPL(inet_twdr_twkill_work);
/* These are always called from BH context. See callers in
@ -307,7 +333,6 @@ void inet_twsk_deschedule(struct inet_timewait_sock *tw,
spin_unlock(&twdr->death_lock);
__inet_twsk_kill(tw, twdr->hashinfo);
}
EXPORT_SYMBOL(inet_twsk_deschedule);
void inet_twsk_schedule(struct inet_timewait_sock *tw,
@ -388,7 +413,6 @@ void inet_twsk_schedule(struct inet_timewait_sock *tw,
mod_timer(&twdr->tw_timer, jiffies + twdr->period);
spin_unlock(&twdr->death_lock);
}
EXPORT_SYMBOL_GPL(inet_twsk_schedule);
void inet_twdr_twcal_tick(unsigned long data)
@ -449,7 +473,6 @@ void inet_twdr_twcal_tick(unsigned long data)
#endif
spin_unlock(&twdr->death_lock);
}
EXPORT_SYMBOL_GPL(inet_twdr_twcal_tick);
void inet_twsk_purge(struct inet_hashinfo *hashinfo,


@ -2540,11 +2540,6 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
ctd.tcpct_cookie_desired = cvp->cookie_desired;
ctd.tcpct_s_data_desired = cvp->s_data_desired;
/* Cookie(s) saved, return as nonce */
if (sizeof(ctd.tcpct_value) < cvp->cookie_pair_size) {
/* impossible? */
return -EINVAL;
}
memcpy(&ctd.tcpct_value[0], &cvp->cookie_pair[0],
cvp->cookie_pair_size);
ctd.tcpct_used = cvp->cookie_pair_size;


@ -2717,6 +2717,35 @@ static void tcp_try_undo_dsack(struct sock *sk)
}
}
/* We can clear retrans_stamp when there are no retransmissions in the
* window. It would seem that it is trivially available for us in
* tp->retrans_out, however, that kind of assumption doesn't consider
* what will happen if errors occur when sending a retransmission for the
* second time. ...It could be that such a segment has only
* TCPCB_EVER_RETRANS set at the present time. It seems that checking
* the head skb is enough except for some reneging corner cases that
* are not worth the effort.
*
* Main reason for all this complexity is the fact that connection dying
* time now depends on the validity of the retrans_stamp, in particular,
* that successive retransmissions of a segment must not advance
* retrans_stamp under any conditions.
*/
static int tcp_any_retrans_done(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
struct sk_buff *skb;
if (tp->retrans_out)
return 1;
skb = tcp_write_queue_head(sk);
if (unlikely(skb && TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS))
return 1;
return 0;
}
/* Undo during fast recovery after partial ACK. */
static int tcp_try_undo_partial(struct sock *sk, int acked)
@ -2729,7 +2758,7 @@ static int tcp_try_undo_partial(struct sock *sk, int acked)
/* Plain luck! Hole is filled with delayed
* packet, rather than with a retransmit.
*/
if (tp->retrans_out == 0)
if (!tcp_any_retrans_done(sk))
tp->retrans_stamp = 0;
tcp_update_reordering(sk, tcp_fackets_out(tp) + acked, 1);
@ -2788,7 +2817,7 @@ static void tcp_try_keep_open(struct sock *sk)
struct tcp_sock *tp = tcp_sk(sk);
int state = TCP_CA_Open;
if (tcp_left_out(tp) || tp->retrans_out || tp->undo_marker)
if (tcp_left_out(tp) || tcp_any_retrans_done(sk) || tp->undo_marker)
state = TCP_CA_Disorder;
if (inet_csk(sk)->icsk_ca_state != state) {
@ -2803,7 +2832,7 @@ static void tcp_try_to_open(struct sock *sk, int flag)
tcp_verify_left_out(tp);
if (!tp->frto_counter && tp->retrans_out == 0)
if (!tp->frto_counter && !tcp_any_retrans_done(sk))
tp->retrans_stamp = 0;
if (flag & FLAG_ECE)


@ -1464,7 +1464,7 @@ struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
}
#endif
__inet_hash_nolisten(newsk);
__inet_hash_nolisten(newsk, NULL);
__inet_inherit_port(sk, newsk);
return newsk;


@ -132,6 +132,35 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
}
}
/* This function calculates a "timeout" which is equivalent to the timeout of a
* TCP connection after "boundary" unsuccessful, exponentially backed-off
* retransmissions with an initial RTO of TCP_RTO_MIN.
*/
static bool retransmits_timed_out(struct sock *sk,
unsigned int boundary)
{
unsigned int timeout, linear_backoff_thresh;
unsigned int start_ts;
if (!inet_csk(sk)->icsk_retransmits)
return false;
if (unlikely(!tcp_sk(sk)->retrans_stamp))
start_ts = TCP_SKB_CB(tcp_write_queue_head(sk))->when;
else
start_ts = tcp_sk(sk)->retrans_stamp;
linear_backoff_thresh = ilog2(TCP_RTO_MAX/TCP_RTO_MIN);
if (boundary <= linear_backoff_thresh)
timeout = ((2 << boundary) - 1) * TCP_RTO_MIN;
else
timeout = ((2 << linear_backoff_thresh) - 1) * TCP_RTO_MIN +
(boundary - linear_backoff_thresh) * TCP_RTO_MAX;
return (tcp_time_stamp - start_ts) >= timeout;
}
/* A write timeout has occurred. Process the after effects. */
static int tcp_write_timeout(struct sock *sk)
{


@ -22,9 +22,10 @@
#include <net/inet6_hashtables.h>
#include <net/ip.h>
void __inet6_hash(struct sock *sk)
int __inet6_hash(struct sock *sk, struct inet_timewait_sock *tw)
{
struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
int twrefcnt = 0;
WARN_ON(!sk_unhashed(sk));
@ -45,10 +46,15 @@ void __inet6_hash(struct sock *sk)
lock = inet_ehash_lockp(hashinfo, hash);
spin_lock(lock);
__sk_nulls_add_node_rcu(sk, list);
if (tw) {
WARN_ON(sk->sk_hash != tw->tw_hash);
twrefcnt = inet_twsk_unhash(tw);
}
spin_unlock(lock);
}
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
return twrefcnt;
}
EXPORT_SYMBOL(__inet6_hash);


@ -96,7 +96,7 @@ static void tcp_v6_hash(struct sock *sk)
return;
}
local_bh_disable();
__inet6_hash(sk);
__inet6_hash(sk, NULL);
local_bh_enable();
}
}
@ -1496,7 +1496,7 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
}
#endif
__inet6_hash(newsk);
__inet6_hash(newsk, NULL);
__inet_inherit_port(sk, newsk);
return newsk;


@ -1193,6 +1193,7 @@ static struct xfrm_state * pfkey_msg2xfrm_state(struct net *net,
x->aalg->alg_key_len = key->sadb_key_bits;
memcpy(x->aalg->alg_key, key+1, keysize);
}
x->aalg->alg_trunc_len = a->uinfo.auth.icv_truncbits;
x->props.aalgo = sa->sadb_sa_auth;
/* x->algo.flags = sa->sadb_sa_flags; */
}


@ -354,7 +354,8 @@ static void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo)
sinfo->rx_packets = sta->rx_packets;
sinfo->tx_packets = sta->tx_packets;
if (sta->local->hw.flags & IEEE80211_HW_SIGNAL_DBM) {
if ((sta->local->hw.flags & IEEE80211_HW_SIGNAL_DBM) ||
(sta->local->hw.flags & IEEE80211_HW_SIGNAL_UNSPEC)) {
sinfo->filled |= STATION_INFO_SIGNAL;
sinfo->signal = (s8)sta->last_signal;
}


@ -746,6 +746,7 @@ struct ieee80211_local {
unsigned int wmm_acm; /* bit field of ACM bits (BIT(802.1D tag)) */
bool pspolling;
bool scan_ps_enabled;
/*
* PS can only be enabled when we have exactly one managed
* interface (and monitors) in PS, this then points there.


@ -427,7 +427,7 @@ int ieee80211_new_mesh_header(struct ieee80211s_hdr *meshhdr,
char *addr5, char *addr6)
{
int aelen = 0;
memset(meshhdr, 0, sizeof(meshhdr));
memset(meshhdr, 0, sizeof(*meshhdr));
meshhdr->ttl = sdata->u.mesh.mshcfg.dot11MeshTTL;
put_unaligned(cpu_to_le32(sdata->u.mesh.mesh_seqnum), &meshhdr->seqnum);
sdata->u.mesh.mesh_seqnum++;


@ -188,8 +188,9 @@ struct mesh_rmc {
*/
#define MESH_PREQ_MIN_INT 10
#define MESH_DIAM_TRAVERSAL_TIME 50
/* Paths will be refreshed if they are closer than PATH_REFRESH_TIME to their
* expiration
/* A path will be refreshed if it is used PATH_REFRESH_TIME milliseconds before
* timing out. This way it will remain ACTIVE and no data frames will be
* unnecessarily held in the pending queue.
*/
#define MESH_PATH_REFRESH_TIME 1000
#define MESH_MIN_DISCOVERY_TIMEOUT (2 * MESH_DIAM_TRAVERSAL_TIME)


@ -937,7 +937,7 @@ int mesh_nexthop_lookup(struct sk_buff *skb,
if (mpath->flags & MESH_PATH_ACTIVE) {
if (time_after(jiffies,
mpath->exp_time +
mpath->exp_time -
msecs_to_jiffies(sdata->u.mesh.mshcfg.path_refresh_time)) &&
!memcmp(sdata->dev->dev_addr, hdr->addr4, ETH_ALEN) &&
!(mpath->flags & MESH_PATH_RESOLVING) &&


@ -1083,8 +1083,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
ieee80211_set_wmm_default(sdata);
ieee80211_recalc_idle(local);
/* channel(_type) changes are handled by ieee80211_hw_config */
local->oper_channel_type = NL80211_CHAN_NO_HT;
@ -1370,6 +1368,7 @@ ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
if (!wk) {
ieee80211_set_disassoc(sdata, true);
ieee80211_recalc_idle(sdata->local);
} else {
list_del(&wk->list);
kfree(wk);
@ -1403,6 +1402,7 @@ ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
sdata->dev->name, mgmt->sa, reason_code);
ieee80211_set_disassoc(sdata, false);
ieee80211_recalc_idle(sdata->local);
return RX_MGMT_CFG80211_DISASSOC;
}
@ -2117,6 +2117,7 @@ static void ieee80211_sta_work(struct work_struct *work)
" after %dms, disconnecting.\n",
bssid, (1000 * IEEE80211_PROBE_WAIT)/HZ);
ieee80211_set_disassoc(sdata, true);
ieee80211_recalc_idle(local);
mutex_unlock(&ifmgd->mtx);
/*
* must be outside lock due to cfg80211,
@ -2560,6 +2561,8 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
IEEE80211_STYPE_DEAUTH, req->reason_code,
cookie);
ieee80211_recalc_idle(sdata->local);
return 0;
}
@ -2592,5 +2595,8 @@ int ieee80211_mgd_disassoc(struct ieee80211_sub_if_data *sdata,
ieee80211_send_deauth_disassoc(sdata, req->bss->bssid,
IEEE80211_STYPE_DISASSOC, req->reason_code,
cookie);
ieee80211_recalc_idle(sdata->local);
return 0;
}


@ -1712,7 +1712,6 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx)
mpp_path_add(proxied_addr, mpp_addr, sdata);
} else {
spin_lock_bh(&mppath->state_lock);
mppath->exp_time = jiffies;
if (compare_ether_addr(mppath->mpp, mpp_addr) != 0)
memcpy(mppath->mpp, mpp_addr, ETH_ALEN);
spin_unlock_bh(&mppath->state_lock);

View file

@@ -227,7 +227,8 @@ static bool ieee80211_prep_hw_scan(struct ieee80211_local *local)
static void ieee80211_scan_ps_enable(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_local *local = sdata->local;
bool ps = false;
local->scan_ps_enabled = false;
/* FIXME: what to do when local->pspolling is true? */
@@ -235,12 +236,13 @@ static void ieee80211_scan_ps_enable(struct ieee80211_sub_if_data *sdata)
cancel_work_sync(&local->dynamic_ps_enable_work);
if (local->hw.conf.flags & IEEE80211_CONF_PS) {
ps = true;
local->scan_ps_enabled = true;
local->hw.conf.flags &= ~IEEE80211_CONF_PS;
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
}
if (!ps || !(local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK))
if (!(local->scan_ps_enabled) ||
!(local->hw.flags & IEEE80211_HW_PS_NULLFUNC_STACK))
/*
* If power save was enabled, no need to send a nullfunc
* frame because AP knows that we are sleeping. But if the
@@ -261,7 +263,7 @@ static void ieee80211_scan_ps_disable(struct ieee80211_sub_if_data *sdata)
if (!local->ps_sdata)
ieee80211_send_nullfunc(local, sdata, 0);
else {
else if (local->scan_ps_enabled) {
/*
* In !IEEE80211_HW_PS_NULLFUNC_STACK case the hardware
* will send a nullfunc frame with the powersave bit set
@@ -277,6 +279,16 @@ static void ieee80211_scan_ps_disable(struct ieee80211_sub_if_data *sdata)
*/
local->hw.conf.flags |= IEEE80211_CONF_PS;
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
} else if (local->hw.conf.dynamic_ps_timeout > 0) {
/*
* If IEEE80211_CONF_PS was not set and the dynamic_ps_timer
* had been running before leaving the operating channel,
* restart the timer now and send a nullfunc frame to inform
* the AP that we are awake.
*/
ieee80211_send_nullfunc(local, sdata, 0);
mod_timer(&local->dynamic_ps_timer, jiffies +
msecs_to_jiffies(local->hw.conf.dynamic_ps_timeout));
}
}
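The scan code now records in local->scan_ps_enabled, rather than in a bool local to ieee80211_scan_ps_enable(), whether it was the one that cleared IEEE80211_CONF_PS before going off-channel, so ieee80211_scan_ps_disable() restores only state it actually changed; the new final branch re-arms the dynamic-PS timer and sends a nullfunc frame when dynamic powersave had been running before the scan. The following is a deliberately reduced sketch of that save/restore pattern, with hypothetical demo_* names, not the mac80211 API:

    #include <stdbool.h>
    #include <stdio.h>

    struct demo_local {
        bool ps_enabled;        /* stands in for IEEE80211_CONF_PS */
        bool scan_ps_enabled;   /* what the scan code remembers */
    };

    static void demo_scan_ps_enable(struct demo_local *l)
    {
        l->scan_ps_enabled = false;
        if (l->ps_enabled) {
            l->scan_ps_enabled = true;   /* remember that *we* disabled PS */
            l->ps_enabled = false;
        }
    }

    static void demo_scan_ps_disable(struct demo_local *l)
    {
        if (l->scan_ps_enabled)
            l->ps_enabled = true;        /* restore only what we changed */
    }

    int main(void)
    {
        struct demo_local l = { .ps_enabled = true };

        demo_scan_ps_enable(&l);         /* leave the operating channel */
        demo_scan_ps_disable(&l);        /* come back */
        printf("PS restored: %s\n", l.ps_enabled ? "yes" : "no");
        return 0;
    }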

View file

@@ -579,7 +579,7 @@ u32 ieee802_11_parse_elems_crc(u8 *start, size_t len,
if (elen > left)
break;
if (calc_crc && id < 64 && (filter & BIT(id)))
if (calc_crc && id < 64 && (filter & (1ULL << id)))
crc = crc32_be(crc, pos - 2, elen + 2);
switch (id) {

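The filter passed in here is a u64 bitmap of requested element IDs, but BIT(id) expands to 1UL << id, and on 32-bit architectures unsigned long is only 32 bits wide, so IDs of 32 and above were never matched (and the shift itself is undefined there). Spelling the constant as 1ULL keeps the mask 64-bit on every architecture. A standalone demonstration of the difference:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int id = 35;                     /* an element ID above bit 31 */
        uint64_t filter = 1ULL << id;    /* the caller asked for this ID */

        /* What a 32-bit BIT() amounts to: bits 32..63 of the 64-bit filter
         * are unreachable (shifting a 32-bit value by >= 32 is undefined,
         * so the test is skipped here rather than executed). */
        if (id < 32 && (filter & (UINT32_C(1) << id)))
            printf("32-bit mask: element %d matched\n", id);
        else
            printf("32-bit mask: element %d missed\n", id);

        /* The fixed form uses a 64-bit constant, so any ID below 64 works. */
        if (filter & (1ULL << id))
            printf("64-bit mask: element %d matched\n", id);
        return 0;
    }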
View file

@@ -579,6 +579,8 @@ static ssize_t rfkill_name_show(struct device *dev,
static const char *rfkill_get_type_str(enum rfkill_type type)
{
BUILD_BUG_ON(NUM_RFKILL_TYPES != RFKILL_TYPE_FM + 1);
switch (type) {
case RFKILL_TYPE_WLAN:
return "wlan";
@@ -597,8 +599,6 @@ static const char *rfkill_get_type_str(enum rfkill_type type)
default:
BUG();
}
BUILD_BUG_ON(NUM_RFKILL_TYPES != RFKILL_TYPE_FM + 1);
}
static ssize_t rfkill_type_show(struct device *dev,

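BUILD_BUG_ON() is a compile-time assertion, so the check fires either way; hoisting it to the top of the function keeps it next to the switch it guards instead of sitting after a default: BUG() case that never falls through. A rough equivalent in plain C11, with made-up DEMO_* names, would be a _Static_assert in the same position:

    #include <stdio.h>

    enum demo_rfkill_type {
        DEMO_TYPE_WLAN,
        DEMO_TYPE_BLUETOOTH,
        DEMO_TYPE_FM,
        DEMO_NUM_TYPES,
    };

    static const char *demo_type_str(enum demo_rfkill_type type)
    {
        /* Breaks the build if a new type is added without updating this. */
        _Static_assert(DEMO_NUM_TYPES == DEMO_TYPE_FM + 1,
                       "demo_type_str() needs updating");

        switch (type) {
        case DEMO_TYPE_WLAN:      return "wlan";
        case DEMO_TYPE_BLUETOOTH: return "bluetooth";
        case DEMO_TYPE_FM:        return "fm";
        default:                  return "unknown";
        }
    }

    int main(void)
    {
        printf("%s\n", demo_type_str(DEMO_TYPE_FM));
        return 0;
    }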
View file

@@ -141,62 +141,35 @@ static const struct ieee80211_regdomain us_regdom = {
.reg_rules = {
/* IEEE 802.11b/g, channels 1..11 */
REG_RULE(2412-10, 2462+10, 40, 6, 27, 0),
/* IEEE 802.11a, channel 36 */
REG_RULE(5180-10, 5180+10, 40, 6, 23, 0),
/* IEEE 802.11a, channel 40 */
REG_RULE(5200-10, 5200+10, 40, 6, 23, 0),
/* IEEE 802.11a, channel 44 */
REG_RULE(5220-10, 5220+10, 40, 6, 23, 0),
/* IEEE 802.11a, channel 36..48 */
REG_RULE(5180-10, 5240+10, 40, 6, 17, 0),
/* IEEE 802.11a, channels 48..64 */
REG_RULE(5240-10, 5320+10, 40, 6, 23, 0),
REG_RULE(5260-10, 5320+10, 40, 6, 20, NL80211_RRF_DFS),
/* IEEE 802.11a, channels 100..124 */
REG_RULE(5500-10, 5590+10, 40, 6, 20, NL80211_RRF_DFS),
/* IEEE 802.11a, channels 132..144 */
REG_RULE(5660-10, 5700+10, 40, 6, 20, NL80211_RRF_DFS),
/* IEEE 802.11a, channels 149..165, outdoor */
REG_RULE(5745-10, 5825+10, 40, 6, 30, 0),
}
};
static const struct ieee80211_regdomain jp_regdom = {
.n_reg_rules = 3,
.n_reg_rules = 6,
.alpha2 = "JP",
.reg_rules = {
/* IEEE 802.11b/g, channels 1..14 */
REG_RULE(2412-10, 2484+10, 40, 6, 20, 0),
/* IEEE 802.11a, channels 34..48 */
REG_RULE(5170-10, 5240+10, 40, 6, 20,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11b/g, channels 1..11 */
REG_RULE(2412-10, 2462+10, 40, 6, 20, 0),
/* IEEE 802.11b/g, channels 12..13 */
REG_RULE(2467-10, 2472+10, 20, 6, 20, 0),
/* IEEE 802.11b/g, channel 14 */
REG_RULE(2484-10, 2484+10, 20, 6, 20, NL80211_RRF_NO_OFDM),
/* IEEE 802.11a, channels 36..48 */
REG_RULE(5180-10, 5240+10, 40, 6, 20, 0),
/* IEEE 802.11a, channels 52..64 */
REG_RULE(5260-10, 5320+10, 40, 6, 20,
NL80211_RRF_NO_IBSS |
NL80211_RRF_DFS),
}
};
static const struct ieee80211_regdomain eu_regdom = {
.n_reg_rules = 6,
/*
* This alpha2 is bogus, we leave it here just for stupid
* backward compatibility
*/
.alpha2 = "EU",
.reg_rules = {
/* IEEE 802.11b/g, channels 1..13 */
REG_RULE(2412-10, 2472+10, 40, 6, 20, 0),
/* IEEE 802.11a, channel 36 */
REG_RULE(5180-10, 5180+10, 40, 6, 23,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11a, channel 40 */
REG_RULE(5200-10, 5200+10, 40, 6, 23,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11a, channel 44 */
REG_RULE(5220-10, 5220+10, 40, 6, 23,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11a, channels 48..64 */
REG_RULE(5240-10, 5320+10, 40, 6, 20,
NL80211_RRF_NO_IBSS |
NL80211_RRF_DFS),
/* IEEE 802.11a, channels 100..140 */
REG_RULE(5500-10, 5700+10, 40, 6, 30,
NL80211_RRF_NO_IBSS |
NL80211_RRF_DFS),
REG_RULE(5260-10, 5320+10, 40, 6, 20, NL80211_RRF_DFS),
/* IEEE 802.11a, channels 100..144 */
REG_RULE(5500-10, 5700+10, 40, 6, 23, NL80211_RRF_DFS),
}
};
@@ -206,15 +179,17 @@ static const struct ieee80211_regdomain *static_regdom(char *alpha2)
return &us_regdom;
if (alpha2[0] == 'J' && alpha2[1] == 'P')
return &jp_regdom;
/* Use world roaming rules for "EU", since it was a pseudo
domain anyway... */
if (alpha2[0] == 'E' && alpha2[1] == 'U')
return &eu_regdom;
/* Default, as per the old rules */
return &us_regdom;
return &world_regdom;
/* Default, world roaming rules */
return &world_regdom;
}
static bool is_old_static_regdom(const struct ieee80211_regdomain *rd)
{
if (rd == &us_regdom || rd == &jp_regdom || rd == &eu_regdom)
if (rd == &us_regdom || rd == &jp_regdom || rd == &world_regdom)
return true;
return false;
}
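Each REG_RULE() entry above describes one frequency range: its start and end in MHz (the ±10 terms widen the range by half a 20 MHz channel around the first and last centre frequencies), the maximum channel bandwidth, antenna gain, EIRP, and regulatory flags such as NL80211_RRF_DFS. A toy range check in the same spirit, using a simplified struct and illustrative values rather than the real cfg80211 types:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in; the real rules live in include/net/cfg80211.h. */
    struct demo_reg_rule {
        int start_mhz, end_mhz;          /* covered frequency range */
        int max_bw_mhz;                  /* widest channel allowed */
        int max_eirp_dbm;                /* transmit power limit */
    };

    static bool demo_rule_allows(const struct demo_reg_rule *r,
                                 int center_mhz, int bw_mhz)
    {
        return center_mhz - bw_mhz / 2 >= r->start_mhz &&
               center_mhz + bw_mhz / 2 <= r->end_mhz &&
               bw_mhz <= r->max_bw_mhz;
    }

    int main(void)
    {
        /* Mirrors "channels 1..11": 2412-10 .. 2462+10 MHz, up to 40 MHz. */
        struct demo_reg_rule us_bg = { 2402, 2472, 40, 27 };

        printf("channel 6  (2437 MHz, 20 MHz wide): %s\n",
               demo_rule_allows(&us_bg, 2437, 20) ? "allowed" : "not allowed");
        printf("channel 13 (2472 MHz, 20 MHz wide): %s\n",
               demo_rule_allows(&us_bg, 2472, 20) ? "allowed" : "not allowed");
        return 0;
    }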

View file

@@ -479,6 +479,7 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
}
err = rdev->ops->del_key(&rdev->wiphy, dev, idx, addr);
}
wdev->wext.connect.privacy = false;
/*
* Applications using wireless extensions expect to be
* able to delete keys that don't exist, so allow that.