qdisc: validate frames going through the direct_xmit path

In commit 50cbe9ab5f ("net: Validate xmit SKBs right when we
pull them out of the qdisc") the validation code was moved out of
dev_hard_start_xmit and into dequeue_skb.

However, this overlooked the fact that we do not always enqueue
the skb onto a qdisc. The first situation is when the qdisc has
the TCQ_F_CAN_BYPASS flag set and is empty. The second situation
is when there is no qdisc attached to the device, which is a
common case for software devices.

Originally spotted, and an initial patch written, by Alexander
Duyck. As a result of the bug, Alex was seeing issues trying to
connect to a vhost_net interface after commit 50cbe9ab5f was
applied.

Added a call to validate_xmit_skb() in __dev_xmit_skb(), in the
code path for qdiscs with the TCQ_F_CAN_BYPASS flag set, and in
__dev_queue_xmit() for the case where the device has no qdisc.

Also handle the error situation where dev_hard_start_xmit() can
return an skb list: when it does not return dev_xmit_complete(rc)
and falls through to the kfree_skb() call, kfree_skb_list() must
be called instead so that the whole list is freed.

Fixes: 50cbe9ab5f ("net: Validate xmit SKBs right when we pull them out of the qdisc")
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer 2014-09-03 17:56:09 +02:00 committed by David S. Miller
parent 3f3c7eec60
commit 1f59533f9c

@@ -2739,7 +2739,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 
 		qdisc_bstats_update(q, skb);
 
-		if (sch_direct_xmit(skb, q, dev, txq, root_lock)) {
+		skb = validate_xmit_skb(skb, dev);
+		if (skb && sch_direct_xmit(skb, q, dev, txq, root_lock)) {
 			if (unlikely(contended)) {
 				spin_unlock(&q->busylock);
 				contended = false;
@@ -2879,6 +2880,10 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
 			if (__this_cpu_read(xmit_recursion) > RECURSION_LIMIT)
 				goto recursion_alert;
 
+			skb = validate_xmit_skb(skb, dev);
+			if (!skb)
+				goto drop;
+
 			HARD_TX_LOCK(dev, txq, cpu);
 
 			if (!netif_xmit_stopped(txq)) {
@@ -2904,10 +2909,11 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
 	}
 
 	rc = -ENETDOWN;
+drop:
 	rcu_read_unlock_bh();
 
 	atomic_long_inc(&dev->tx_dropped);
-	kfree_skb(skb);
+	kfree_skb_list(skb);
 	return rc;
 out:
 	rcu_read_unlock_bh();