[NET_SCHED]: Making rate table lookups more flexible.

This is done in order to add support for changing the rate table to
use the upper-boundary L2T (length to time) value. Currently we use the
lower boundary, which results in under-estimating the actual bandwidth
usage.

Extend the tc_ratespec struct with two parameters: 1) "cell_align",
which allows adjusting the alignment of the rate table, and
2) "overhead", which allows adding a per-packet overhead before the lookup.

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
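
As an illustration (not part of the patch), here is a minimal userspace-style
sketch of how a rate table could be built for the upper-boundary lookup and how
the two new tc_ratespec fields could be filled. The helper names build_rtab()
and l2t(), the TIME_UNITS_PER_SEC value and the bytes-per-second "bps"
parameter are assumptions for this example, not code from tc or the kernel:

    #include <linux/types.h>
    #include <linux/pkt_sched.h>

    #define TIME_UNITS_PER_SEC	1000000	/* assumption: table entries in usec */

    /* time needed to transmit 'len' bytes at 'bps' bytes per second */
    static __u32 l2t(unsigned int bps, unsigned int len)
    {
    	return (__u32)(((unsigned long long)len * TIME_UNITS_PER_SEC) / bps);
    }

    static void build_rtab(struct tc_ratespec *r, __u32 rtab[256],
    		       unsigned int bps, int cell_log, unsigned short overhead)
    {
    	int i;

    	r->rate       = bps;
    	r->cell_log   = cell_log;
    	r->overhead   = overhead;	/* added to pktlen before the lookup */
    	r->cell_align = -1;		/* index by the cell's upper boundary */

    	/* each slot now holds the xmit time of its *upper* boundary */
    	for (i = 0; i < 256; i++)
    		rtab[i] = l2t(bps, (i + 1) << cell_log);
    }

Setting cell_align = -1 makes the in-kernel lookup index each packet by the
cell that bounds it from above, which together with the (i + 1) sizing of the
table entries gives the upper-boundary L2T described above.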

@@ -77,8 +77,8 @@ struct tc_ratespec
 {
 	unsigned char	cell_log;
 	unsigned char	__reserved;
-	unsigned short	feature;
-	short		addend;
+	unsigned short	overhead;
+	short		cell_align;
 	unsigned short	mpu;
 	__u32		rate;
 };
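
To see the effect of the renamed fields on the lookup in the next hunk, take
illustrative values cell_log = 3, overhead = 24 and cell_align = -1: a 100-byte
packet is now charged from slot (100 + 24 - 1) >> 3 = 15, while the old
lower-boundary code indexed slot 100 >> 3 = 12 and ignored any per-packet
overhead.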

@@ -307,7 +307,9 @@ static inline int qdisc_reshape_fail(struct sk_buff *skb, struct Qdisc *sch)
  */
 static inline u32 qdisc_l2t(struct qdisc_rate_table* rtab, unsigned int pktlen)
 {
-	int slot = pktlen;
+	int slot = pktlen + rtab->rate.cell_align + rtab->rate.overhead;
+	if (slot < 0)
+		slot = 0;
 	slot >>= rtab->rate.cell_log;
 	if (slot > 255)
 		return (rtab->data[255]*(slot >> 8) + rtab->data[slot & 0xFF]);
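
Callers do not change, since the adjustment happens inside qdisc_l2t(). As a
hedged sketch, loosely following the token-bucket pattern rather than quoting
any in-tree qdisc (the helper name charge_packet() and the tokens variable are
illustrative), a shaper keeps charging packets the same way:

    #include <net/sch_generic.h>

    /*
     * Subtract the transmit time of skb, with overhead and cell alignment
     * already applied inside qdisc_l2t(), from a token budget.
     * Illustrative helper, not in-tree code.
     */
    static inline void charge_packet(struct qdisc_rate_table *rtab,
    				 long *tokens, const struct sk_buff *skb)
    {
    	*tokens -= (long)qdisc_l2t(rtab, skb->len);
    }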