This patch randomizes the local port selected on bind() for connections,
to help defend against possible security attacks. It should also be faster
in most cases because there is no need for a global lock.
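For illustration only, a minimal user-space sketch of the idea (the in-kernel code differs): start the search at a pseudo-random offset in the ephemeral range instead of a fixed point, so successive binds do not hand out predictable ports. port_in_use() is a made-up stand-in for the real lookup.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for the real "is this port taken?" hash lookup. */
static int port_in_use(int port)
{
	return port % 7 == 0;	/* arbitrary placeholder */
}

int main(void)
{
	const int low = 32768, high = 61000;	/* typical ephemeral range */
	const int range = high - low + 1;
	srand((unsigned int)time(NULL));
	int start = rand() % range;		/* randomized starting offset */

	for (int i = 0; i < range; i++) {
		int port = low + (start + i) % range;
		if (!port_in_use(port)) {
			printf("selected local port %d\n", port);
			return 0;
		}
	}
	fprintf(stderr, "ephemeral port range exhausted\n");
	return 1;
}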
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Changes IP_ECN_set_ce() and IP6_ECN_set_ce() to return 0 if the CE
bits could not be set because none of the ECT bits are set, or 1
if the CE bits are already set or have been successfully set.
Introduces INET_ECN_set_ce(skb) to enable CE bits for all supported
protocols.
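A simplified user-space model of the new return convention (checksum fix-up and the IPv6 variant are omitted; the constants mirror the ECN codepoints in the TOS/traffic-class byte):

#include <stdio.h>
#include <stdint.h>

#define ECN_MASK	0x03
#define ECN_NOT_ECT	0x00
#define ECN_CE		0x03

/* Returns 0 if CE cannot be set (no ECT bit present),
 * 1 if CE was already set or has just been set. */
static int set_ce(uint8_t *tos)
{
	if ((*tos & ECN_MASK) == ECN_NOT_ECT)
		return 0;
	*tos |= ECN_CE;
	return 1;
}

int main(void)
{
	uint8_t not_ect = 0x00, ect0 = 0x02, ce = 0x03;
	printf("not-ECT: %d, ECT(0): %d, CE: %d\n",
	       set_ce(&not_ect), set_ce(&ect0), set_ce(&ce));
	return 0;
}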
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Extracts the RED algorithm from sch_red.c and puts it into include/net/red.h
for use by other RED-based modules. The statistics are extended to be more
fine-grained in order to distinguish between probability/forced marks/drops.
We now reset the average queue length when setting new parameters; leaving
it unchanged might result in an unreasonable qavg for a while, depending on the value of W.
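To illustrate why a stale average matters, here is a floating-point sketch of the EWMA that RED maintains (the kernel uses fixed-point arithmetic with a Wlog shift; names here are illustrative):

#include <stdio.h>

struct red_state {
	double qavg;	/* average queue length */
	double w;	/* EWMA weight, 2^-Wlog in the kernel's fixed point */
};

/* Resetting qavg here is the behaviour described above: with a small W the
 * old average would otherwise linger long after a parameter change. */
static void red_set_params(struct red_state *s, double w)
{
	s->w = w;
	s->qavg = 0.0;
}

static void red_update(struct red_state *s, unsigned int qlen)
{
	s->qavg += s->w * ((double)qlen - s->qavg);
}

int main(void)
{
	struct red_state s;
	red_set_params(&s, 1.0 / 512);		/* W = 2^-9 */
	for (int i = 0; i < 1000; i++)
		red_update(&s, 100);
	printf("qavg after 1000 enqueues at qlen=100: %.2f\n", s.qavg);
	return 0;
}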
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Rename SCTP-specific control message flags to use the SCTP_ prefix rather than
the MSG_ prefix, as per the latest SCTP sockets API draft.
Signed-off-by: Ivan Skytte Jorgensen <isj-sctp@i1.dk>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
This patch moves rfcomm_crc_table[] into the RFCOMM core, because there
is no need to keep it in a separate file.
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Now that we've switched over to storing MTUs in the xfrm_dst entries,
we no longer need the dst's get_mss methods. This patch gets rid of
them.
It also documents the fact that our MTU calculation is not optimal
for ESP.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
On architectures where the char type defaults to unsigned, some of the
arithmetic in the AX.25 stack fails, resulting in some packets being dropped
on receive.
Credits for tracking this down and the original patch to
Bob Brose N0QBJ <linuxhams@n0qbj-11.ampr.org>.
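A small self-contained demonstration of the pitfall (not the AX.25 code itself): arithmetic that expects a negative result breaks silently when plain char is unsigned, which is why the fix spells out signed char.

#include <stdio.h>

int main(void)
{
	char plain = -1;		/* becomes 255 where plain char is unsigned */
	signed char always = -1;	/* -1 on every architecture */

	if (plain < 0)
		printf("plain char is signed on this platform\n");
	else
		printf("plain char is unsigned on this platform\n");
	printf("signed char value: %d\n", always);
	return 0;
}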
Signed-off-by: Ralf Baechle DL5RB <ralf@linux-mips.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
This is required to avoid unloading a module that has active timewait
sockets, such as DCCP.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
- added typedef unsigned int __nocast gfp_t;
- replaced __nocast uses for gfp flags with gfp_t - it gives exactly
the same warnings as far as sparse is concerned, doesn't change
generated code (from gcc's point of view we replaced unsigned int with a
typedef) and documents what's going on far better.
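A compile-able illustration of the point; the function names are made up, and the __nocast expansion follows the usual sparse-only pattern (a normal compiler sees nothing):

#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>

/* Sparse-only annotation; expands to nothing for a normal compiler. */
#ifdef __CHECKER__
# define __nocast __attribute__((nocast))
#else
# define __nocast
#endif

typedef unsigned int __nocast gfp_t;

/* Before: every prototype annotated a bare unsigned int. */
void *old_alloc(size_t size, unsigned int __nocast flags);

/* After: the typedef carries the annotation and documents the intent. */
static void *new_alloc(size_t size, gfp_t flags)
{
	(void)flags;			/* flags unused in this toy allocator */
	return malloc(size);
}

int main(void)
{
	void *p = new_alloc(16, 0);
	printf("allocated: %p\n", p);
	free(p);
	return 0;
}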
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Adds alignment attribute to a few structures used with SCTP socket
options so that the sizes and offsets remain the same when built using
either 32 or 64 bit tools.
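A made-up structure illustrating the technique: packed plus aligned(4) pins the layout so a 64-bit kernel and a 32-bit application agree on sizeof() and member offsets.

#include <stdio.h>
#include <stdint.h>

struct opt_natural {			/* layout differs between ABIs */
	uint32_t assoc_id;
	uint64_t value;
};

struct opt_fixed {			/* same layout on 32- and 64-bit builds */
	uint32_t assoc_id;
	uint64_t value;
} __attribute__((packed, aligned(4)));

int main(void)
{
	printf("natural: %zu bytes, fixed: %zu bytes\n",
	       sizeof(struct opt_natural), sizeof(struct opt_fixed));
	return 0;
}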
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The old socket options are marked with a _OLD suffix so that the
existing 32-bit apps on 32-bit kernels do not break.
Signed-off-by: Ivan Skytte Jørgensen <isj-sctp@i1.dk>
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Here is a patch that adds a helper called xfrm_policy_id2dir to
document the fact that the policy direction can be and is derived
from the index.
This is based on a patch by YOSHIFUJI Hideaki and 210313105@suda.edu.cn.
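Sketched under the assumption that the direction lives in the low three bits of the policy index (hence the mask), the helper is essentially:

#include <stdio.h>
#include <stdint.h>

/* The direction is encoded in the low bits of the policy index,
 * so it can be recovered with a simple mask. */
static inline int xfrm_policy_id2dir(uint32_t index)
{
	return index & 7;
}

int main(void)
{
	uint32_t index = 42;	/* arbitrary example index */
	printf("policy index %u -> direction %d\n", index, xfrm_policy_id2dir(index));
	return 0;
}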
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix implicit nocast warnings in xfrm code:
net/xfrm/xfrm_policy.c:232:47: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Randy Dunlap <rdunlap@xenotime.net>
Fix implicit nocast warnings in ip_vs code:
net/ipv4/ipvs/ip_vs_app.c:631:54: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix implicit nocast warnings in decnet code:
net/decnet/af_decnet.c:458:40: warning: implicit cast to nocast type
net/decnet/dn_nsp_out.c:125:35: warning: implicit cast to nocast type
net/decnet/dn_nsp_out.c:219:29: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
No need to align struct inet_ehash_bucket on an 8-byte boundary.
On 32-bit uniprocessor builds that's a waste of 4 bytes per struct (50%).
On other platforms the attribute is useless: natural alignment is already 8.
platform | Size before | Size after patch
-------------+-------------+------------------
32 bits, UP | 8 | 4
32 bits, SMP | 8 | 8
64 bits, UP | 8 | 8
64 bits, SMP | 16 | 16
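A toy demonstration of what the dropped attribute costs (the real bucket is an rwlock plus a list head; on 32-bit UP the lock effectively compiles away, leaving just the 4-byte pointer modelled here):

#include <stdio.h>

struct bucket_plain {			/* natural alignment */
	void *chain_first;
};

struct bucket_aligned {			/* forced 8-byte alignment */
	void *chain_first;
} __attribute__((aligned(8)));

int main(void)
{
	printf("plain: %zu bytes, aligned(8): %zu bytes\n",
	       sizeof(struct bucket_plain), sizeof(struct bucket_aligned));
	return 0;
}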
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnaldo and I agreed it could be applied now, because I have other
pending patches depending on this one (Thank you Arnaldo)
(The other important patch moves skc_refcnt into a separate cache line,
so that SMP/NUMA performance doesn't suffer from cache line ping-pongs.)
1) First some performance data :
--------------------------------
tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established().
The most time-critical code is:
sk_for_each(sk, node, &head->chain) {
if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
goto hit; /* You sunk my battleship! */
}
The sk_for_each() does use prefetch() hints, but only the beginning of
"struct sock" is prefetched.
As INET_MATCH's first comparison uses inet_sk(__sk)->daddr, which is far
away from the beginning of "struct sock", it has to bring a cold cache
line into the CPU cache. Each iteration has to use at least 2 cache
lines.
This can be problematic if some chains are very long.
2) The goal
-----------
The idea I had is to change things so that INET_MATCH() may return
FALSE in 99% of cases using only the data already in the CPU cache,
i.e. one cache line per iteration.
3) Description of the patch
---------------------------
Adds a new 'unsigned int skc_hash' field in 'struct sock_common',
filling a 32-bit hole on 64-bit platforms.
struct sock_common {
unsigned short skc_family;
volatile unsigned char skc_state;
unsigned char skc_reuse;
int skc_bound_dev_if;
struct hlist_node skc_node;
struct hlist_node skc_bind_node;
atomic_t skc_refcnt;
+ unsigned int skc_hash;
struct proto *skc_prot;
};
Store in this 32-bit field the full hash, not masked by (ehash_size - 1).
Using this full hash as the first comparison done in INET_MATCH
permits us to immediately skip the element without touching a second cache
line in case of a miss.
Suppress the sk_hashent/tw_hashent fields since skc_hash (aliased to
sk_hash and tw_hash) already contains the slot number if we mask with
(ehash_size - 1).
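In other words (a tiny sketch with made-up example values):

#include <stdio.h>

int main(void)
{
	unsigned int sk_hash = 0x9e3779b9;	/* full 32-bit hash, as now kept in skc_hash */
	unsigned int ehash_size = 1 << 16;	/* table size, always a power of two */

	/* The old hashent value is just the full hash masked by the table size,
	 * so nothing is lost by dropping the separate field. */
	unsigned int slot = sk_hash & (ehash_size - 1);
	printf("hash %#x -> slot %u\n", sk_hash, slot);
	return 0;
}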
File include/net/inet_hashtables.h
64-bit platforms:
#define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
(((__sk)->sk_hash == (__hash)) && \
((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie)) && \
((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports)) && \
(!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
32-bit platforms:
#define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
(((__sk)->sk_hash == (__hash)) && \
(inet_sk(__sk)->daddr == (__saddr)) && \
(inet_sk(__sk)->rcv_saddr == (__daddr)) && \
(!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
- Adds a prefetch(head->chain.first) in
__inet_lookup_established()/__tcp_v4_check_established() and
__inet6_lookup_established()/__tcp_v6_check_established() and
__dccp_v4_check_established() to bring the first element of the list into
cache before the {read|write}_lock(&head->lock);
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
header, and I also added the ieee80211_is_cck_rate counterpart.
Various drivers currently create their own versions of these functions,
but I guess the ieee80211 stack is the best place to provide such
routines.
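A sketch of what such a helper looks like; the rate encoding (units of 500 kbps, 0x80 basic-rate flag) and the name are assumptions for illustration, not the exact ieee80211.h definitions:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* 802.11b CCK rates are 1, 2, 5.5 and 11 Mbps. */
static bool is_cck_rate(uint8_t rate)
{
	switch (rate & ~0x80) {		/* strip the "basic rate" flag */
	case 2:				/* 1 Mbps   */
	case 4:				/* 2 Mbps   */
	case 11:			/* 5.5 Mbps */
	case 22:			/* 11 Mbps  */
		return true;
	default:
		return false;
	}
}

int main(void)
{
	printf("11 Mbps CCK? %d, 54 Mbps CCK? %d\n",
	       is_cck_rate(22), is_cck_rate(108));
	return 0;
}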
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Changed the crypto methods so that the init handler no longer requires a
struct ieee80211_device reference. Instead we now have get/set flags
methods for each crypto component.
Setting of TKIP countermeasures can now be done via
set_flags(IEEE80211_CRYPTO_TKIP_COUNTERMEASURES)
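A hypothetical sketch of the get/set-flags pattern described above (structure and names are illustrative, not the exact ieee80211 crypto ops):

#include <stdio.h>

#define CRYPTO_TKIP_COUNTERMEASURES	(1UL << 0)	/* illustrative flag bit */

struct crypto_ops {
	unsigned long flags;
	void (*set_flags)(struct crypto_ops *ops, unsigned long flags);
	unsigned long (*get_flags)(struct crypto_ops *ops);
};

static void demo_set_flags(struct crypto_ops *ops, unsigned long flags)
{
	ops->flags = flags;
}

static unsigned long demo_get_flags(struct crypto_ops *ops)
{
	return ops->flags;
}

int main(void)
{
	struct crypto_ops tkip = { 0, demo_set_flags, demo_get_flags };

	/* Enable TKIP countermeasures through the flags interface
	 * instead of reaching into an ieee80211_device. */
	tkip.set_flags(&tkip, tkip.get_flags(&tkip) | CRYPTO_TKIP_COUNTERMEASURES);
	printf("countermeasures enabled: %lu\n",
	       tkip.get_flags(&tkip) & CRYPTO_TKIP_COUNTERMEASURES);
	return 0;
}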
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Updated based on Michael Wu's patch and comments sent to netdev.
Added IE comments to ieee80211_* frame structures.
Changed reason_code to reason (for consistency).
Removed info_element from ieee80211_disassoc.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Added handle_deauth() callback.
Enhanced crypt_{tkip,ccmp} to support varying splits of HW/SW offload.
Changed channel freq to u32 from u16.
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Added subsystem version string and reporting via MODULE_VERSION and
printk during load.
NOTE: This is the version support split out from patch 24/29 of the
prior series.
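An illustrative kernel-module snippet showing the pattern (version string and names are made up, not the actual ieee80211 sources):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

#define DRV_VERSION "1.0.0"		/* made-up version string */

MODULE_VERSION(DRV_VERSION);
MODULE_LICENSE("GPL");

static int __init demo_init(void)
{
	printk(KERN_INFO "demo: subsystem version " DRV_VERSION "\n");
	return 0;
}

static void __exit demo_exit(void)
{
	printk(KERN_INFO "demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);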
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Borrowing the structure of TCP/IP for this. On receiving new connections I
was bh_lock_sock'ing the _new_ sock, not the listening one, duh; now it survives
the ssh connection storm I've been using to test this specific bug.
Also fixes send-side skb sock accounting.
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
So as to set the newly created sk_buff->dev member with it; that way we stop
using dev_base->next, which is the wrong thing to do, as there may well be
several interfaces being used with LLC. This was not such a big problem after
all, as most of the users of llc_alloc_frame were setting the correct dev, but
this way the code is reduced.
This also fixes another bug in llc_station_ac_send_null_dsap_xid_c, which was
not setting the skb->dev field.
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Remove old WIRELESS_EXT version compatibility.
In-tree code doesn't need to maintain backward compatibility.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Updated copyright dates.
NOTE: This is a split out of just the copyright updates from patch
24/29 in the prior series.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Mixed PTK/GTK CCMP/TKIP support.
Signed-off-by: Hong Liu <hong.liu@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>