QoS using the ingress qdisc and ifb

Date: 2021-09-06 17:53:09

ifb


The Intermediate Functional Block device is the successor to the IMQ iptables module, which was never integrated. Its advantages over IMQ: it is cleaner, particularly on SMP, with a _lot_ less code. The old dummy-device functionality is preserved, while the new behavior only kicks in if you use actions.
To use an IFB, you must have IFB support in your kernel (configuration option CONFIG_IFB). Assuming you have a modular kernel, the module is named 'ifb' and may be loaded with modprobe ifb (if you have modprobe installed) or insmod /path/to/module/ifb.

By default, two IFB devices (ifb0 and ifb1) are created. Bring them up with:

 ip link set ifb0 up
 ip link set ifb1 up
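
If you need more (or fewer) IFB devices, the count can be set when the module is loaded. A small sketch, assuming a modular kernel; numifbs is the ifb module parameter controlling how many devices get created:

```shell
# Create four ifb devices instead of the default two, then bring
# the extra ones up as well.
modprobe ifb numifbs=4
ip link set ifb2 up
ip link set ifb3 up
```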

IFB Usage

As far as I know, the reasons listed below are why people use IMQ. It would be nice to hear of anything I missed.

  • qdiscs/policies that are per device as opposed to system wide. IMQ allows for sharing.
  • Allows queueing incoming traffic for shaping instead of dropping. I am not aware of any study showing that policing is worse than shaping at achieving the end goal of rate control; I would be interested if anyone is experimenting. (Re shaping vs. policing: the desire for shaping comes more from the need for complex rules, as with HTB.)
  • A very interesting use: if you are serving P2P, you may want to give preference to your own locally originated traffic (when responses come back) over someone using your system for BitTorrent. QoS based on connection state is the solution. What people did to achieve this was stick IMQ somewhere around the pre-local hook. I think this is a pretty neat feature to have in Linux in general (i.e., not just for IMQ).

But I won't go back to putting netfilter hooks in the device to satisfy this. I also don't think it's worth hacking ifb further to make it aware of, say, L3 info and play ip rule tricks to achieve this.

Instead, the plan is to have a conntrack-related action. This action will selectively query or create conntrack state on incoming packets. Packets could then be redirected to ifb based on what happens; e.g., on incoming packets, if we find they have known state we could send them to a different queue than packets without existing state. All of this, however, depends on whatever rules the admin enters.

At the moment this function does not exist yet. I decided that, instead of sitting on the patch, I would release it and add this feature later if there is pressure.

What you can do with ifb currently with actions

Let's say you are policing packets from alias 192.168.200.200/32 and you don't want them to exceed 100kbit/s going out:

tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
  match ip src 192.168.200.200/32 flowid 1:2 \
  action police rate 100kbit burst 90k drop

If you run tcpdump on eth0, you will see all packets going out with src 192.168.200.200/32, dropped or not.

Extend the rule a little to see only the ones that made it out:

tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
  match ip src 192.168.200.200/32 flowid 1:2 \
  action police rate 100kbit burst 90k drop \
  action mirred egress mirror dev ifb0

Now fire up tcpdump on ifb0 to see only those packets:

tcpdump -n -i ifb0 -x -e -t

Essentially a good debugging/logging interface.

If you replace mirror with redirect, those packets will be blackholed and will never make it out. This redirect behavior changes with the new patch (but the mirror behavior does not).

IFB Example

Many readers have found this page unhelpful in explaining why IFB is useful and how it should be used.

These examples are taken from a posting of Jamal at http://www.mail-archive.com/netdev@vger.kernel.org/msg04900.html

What this script will demonstrate is the following sequence:

1) Any packet going out on eth0 to 10.0.0.229 is classified as class 1:10 and redirected to ifb0.
2) a) On reaching ifb0, the packet is classified as class 1:2,
   b) subjected to token bucket shaping at a rate of 20kbit/s,
   c) and sent back to eth0.
3) On coming back to eth0, the classification 1:10 is still valid, and this packet is put through an HTB class which limits the rate to 256kbit/s.

export TC="/sbin/tc"

$TC qdisc del dev ifb0 root handle 1: prio
$TC qdisc add dev ifb0 root handle 1: prio
$TC qdisc add dev ifb0 parent 1:1 handle 10: sfq
$TC qdisc add dev ifb0 parent 1:2 handle 20: tbf \
rate 20kbit buffer 1600 limit 3000
$TC qdisc add dev ifb0 parent 1:3 handle 30: sfq                               
$TC filter add dev ifb0 parent 1: protocol ip prio 1 u32 \
match ip dst 11.0.0.0/24 flowid 1:1
$TC filter add dev ifb0 parent 1: protocol ip prio 2 u32 \
match ip dst 10.0.0.0/24 flowid 1:2

ifconfig ifb0 up

$TC qdisc del dev eth0 root handle 1: htb default 2
$TC qdisc add dev eth0 root handle 1: htb default 2
$TC class add dev eth0 parent 1: classid 1:1 htb rate 800Kbit
$TC class add dev eth0 parent 1: classid 1:2 htb rate 800Kbit
$TC class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 384kbit
$TC class add dev eth0 parent 1:1 classid 1:20 htb rate 512kbit ceil 648kbit
$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip dst 10.0.0.229/32 flowid 1:10 \
action mirred egress redirect dev ifb0

A little test (be careful if you are SSHed in and are classifying on
that IP; the counters may not be easy to follow)
-----

A ping ...

mambo:~# ping -c2 10.0.0.229

// first at ifb0

// observe that the second filter was successful twice

mambo:~# $TC -s filter show dev ifb0 parent 1:
filter protocol ip pref 1 u32
filter protocol ip pref 1 u32 fh 800: ht divisor 1
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid
1:1  (rule hit 2 success 0)
  match 0b000000/ffffff00 at 16 (success 0 )
filter protocol ip pref 2 u32
filter protocol ip pref 2 u32 fh 801: ht divisor 1
filter protocol ip pref 2 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid
1:2  (rule hit 2 success 2)
  match 0a000000/ffffff00 at 16 (success 2 )

// next the qdisc numbers ..
// observe that 1:2 has 2 packets

mambo:~# $TC -s qdisc show dev ifb0
qdisc prio 1: bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:1 limit 128p quantum 1514b
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc tbf 20: parent 1:2 rate 20000bit burst 1599b lat 546.9ms
Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 30: parent 1:3 limit 128p quantum 1514b
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
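
As a sanity check on the tbf figures above (this aside is mine, not from the original post), the latency tc prints follows from limit, buffer, and rate: a full queue of (limit - buffer) bytes drains at the configured rate.

```shell
# Max queueing delay of the tbf above: (limit - buffer) / rate.
# With limit 3000 B, buffer ~1600 B and rate 20kbit/s this gives
# 560ms; tc prints ~547ms because of internal timer-tick rounding.
limit=3000 buffer=1600 rate_bps=20000
lat_ms=$(( (limit - buffer) * 8 * 1000 / rate_bps ))
echo "${lat_ms}ms"
```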

// Next look at eth0
// observe class 1:10 which is where the pings went through after
// they came back from the ifb0 device.

mambo:~# $TC -s class show dev eth0
class htb 1:1 root rate 800000bit ceil 800000bit burst 1699b cburst 1699b
Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: 16425 ctokens: 16425

class htb 1:10 parent 1:1 prio 0 rate 256000bit ceil 384000bit burst 1631b
cburst 1647b
Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 2 borrowed: 0 giants: 0
tokens: 49152 ctokens: 33110

class htb 1:2 root prio 0 rate 800000bit ceil 800000bit burst 1699b cburst 1699b
Sent 47714 bytes 321 pkt (dropped 0, overlimits 0 requeues 0)
rate 3920bit 3pps backlog 0b 0p requeues 0
lended: 321 borrowed: 0 giants: 0
tokens: 16262 ctokens: 16262

class htb 1:20 parent 1:1 prio 0 rate 512000bit ceil 648000bit burst 1663b
cburst 1680b
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: 26624 ctokens: 21251

-----
mambo:~# $TC -s filter show dev eth0 parent 1:
filter protocol ip pref 1 u32
filter protocol ip pref 1 u32 fh 800: ht divisor 1
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid
1:10  (rule hit 235 success 4)
  match 0a0000e5/ffffffff at 16 (success 4 )
        action order 1: mirred (Egress Redirect to device ifb0) stolen
        index 2 ref 1 bind 1 installed 114 sec used 100 sec
        Action statistics:
        Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
        rate 0bit 0pps backlog 0b 0p requeues 0

IFB requirements



In order to use ifb you need:

  • ifb support in the kernel (2.6.20 works OK)
        Menu option: Device drivers -> Network device support -> Intermediate Functional Block support
        Module name: ifb
  • tc from an iproute2 with support for "actions" (2.6.20-20070313 works OK; the package from Debian etch is outdated). You can download it from here: http://developer.osdl.org/dev/iproute2/download/

Ingress qdisc


All qdiscs discussed so far are egress qdiscs. Each interface however
can also have an ingress qdisc which is not used to send packets out to
the network adaptor. Instead, it allows you to apply tc filters to
packets coming in over the interface, regardless
of whether they have a local destination or are to be forwarded.

As the tc filters contain a full Token Bucket Filter implementation, and
are also able to match on the kernel flow estimator, there is a lot of
functionality available. This effectively allows you to police incoming
traffic, before it even enters the IP stack.

Parameters & usage

The ingress qdisc itself does not require any parameters. It differs
from other qdiscs in that it does not occupy the root of a device.
Attach it like this:

# delete any existing qdiscs
tc qdisc del dev eth0 ingress 2>/dev/null
tc qdisc del dev eth0 root 2>/dev/null

# add new qdisc and filter
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50  u32 match ip src 0.0.0.0/0 police rate 2048kbps burst 1m drop flowid :1
tc qdisc add dev eth0 root tbf rate 2048kbps latency 50ms burst 1m

I played a bit with the ingress qdisc after seeing Patrick and Stef talking about it, and came up with a few notes and a few questions.

: The ingress qdisc itself has no parameters.  The only thing you can
: do is use the policers.  I have a link with a patch to extend this:
: http://www.cyberus.ca/~hadi/patches/action/  Maybe this can help.
:
:
: I have some more info about ingress in my mail files, but I have to
: sort it out and put it somewhere on docum.org.  But I still haven't
: found the time to do so.

Regarding policers and the ingress qdisc: I had never used them before
today, but have the following understanding.

About the ingress qdisc:

  - the ingress qdisc (known as "ffff:") can't have any child classes (hence the existence of IMQ)
  - the only thing you can do with the ingress qdisc is attach filters

About filtering on the ingress qdisc:

  - since there are no classes to which to direct the packets, the only reasonable option (reasonable, indeed!) is to drop the packets
  - with clever use of filtering, you can limit particular traffic signatures to particular uses of your bandwidth
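
To make the second point concrete, here is a hedged example (my own, not from the original text) that uses an ingress filter to police one traffic signature, inbound ICMP, to a fixed rate, dropping the excess:

```shell
# Cap inbound ICMP (IP protocol 1) on eth0 at 56kbit/s; anything
# over the limit is dropped before it even enters the IP stack.
tc qdisc add dev eth0 ingress 2>/dev/null
tc filter add dev eth0 parent ffff: protocol ip prio 20 u32 \
  match ip protocol 1 0xff \
  police rate 56kbit burst 10k drop flowid :1
```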


QoS Using ifb and ingress qdisc

Add some qdisc/class/filter rules to eth0/ifb0/ifb1:

tc qdisc add dev eth0 ingress 2>/dev/null

# ingress filter
tc filter add dev eth0 parent ffff: protocol ip prio 10 u32 match u32 0 0 flowid 1:1  action mirred egress redirect dev ifb0
# egress filter
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 match u32 0 0 flowid 1:1  action mirred egress redirect dev ifb1
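
Note that the two redirects above only work end to end once the targets are set up: the egress filter assumes a classful root qdisc with handle 1: already exists on eth0, and ifb0/ifb1 need qdiscs of their own before any shaping happens. A minimal sketch (the 2mbit/512kbit rates are illustrative assumptions, not from the original):

```shell
# Bring the ifb devices up and give each a simple HTB shaper:
# ifb0 shapes the redirected ingress (download) traffic,
# ifb1 the redirected egress (upload) traffic.
ip link set ifb0 up
ip link set ifb1 up
tc qdisc add dev ifb0 root handle 1: htb default 1
tc class add dev ifb0 parent 1: classid 1:1 htb rate 2mbit
tc qdisc add dev ifb1 root handle 1: htb default 1
tc class add dev ifb1 parent 1: classid 1:1 htb rate 512kbit
```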