The FireBrick provides a way to bond or load share traffic over multiple links. These terms are not used consistently in the industry, so they are explained in more detail here.
Load sharing is a feature of the stateful session tracking firewall. When a session is established there are controls in the route-override and rule-set configuration that allow port and IP mappings and routing via specific gateways.
Load sharing is where there is a choice of more than one setting to apply; the choice is made at random, biased by a weighting in the configuration.
Once the choice is made it applies to the whole session. A good example is a FireBrick in front of a set of web servers: a single IP address routes to the FireBrick, which maps each new session to one of the web servers sharing the load.
Load sharing does not allow a single session to go faster than the link it is using, but it can spread multiple sessions over multiple lines to achieve a greater average speed.
This makes it very useful for shared offices using multiple internet connections. You can load share multiple internet connections like this with no support from the ISP, as each session just uses one of the links as normal.
At the point the decision is made there is no way to know how much traffic a session will use, so the choice is based on chance and not on any measure of the current load on each link.
To use load sharing, use the share object within rule objects within rule-set or route-override objects. The weighting attribute controls the bias for selecting which share to apply.
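The per-session weighted choice described above can be sketched as follows. This is an illustrative Python sketch of the concept, not FireBrick code; the server names and weightings are hypothetical:

```python
import random

# Hypothetical shares: each candidate mapping with its configured weighting.
shares = [
    {"target": "web1.example.com", "weighting": 3},
    {"target": "web2.example.com", "weighting": 1},
]

def pick_share(shares):
    """Pick one share at random, biased by weighting.

    This happens once, when the session is established; the result
    then applies to every packet in that session.
    """
    weights = [s["weighting"] for s in shares]
    return random.choices(shares, weights=weights, k=1)[0]

# The chosen target handles the whole session.
session_target = pick_share(shares)["target"]
```

Note that nothing here consults the current load on each target: as the text says, the decision is purely probabilistic, which is why it balances well on average but not per session.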
Bonding is different from load sharing. It applies where more than one link is available and packets can be sent down any of them. The packets are not modified to do this, so every link must be able to carry the packets' source and destination addresses.
The choice of which link to use is based on the current load on each link with reference to a configured speed limit per link.
This allows traffic to be carefully balanced to make full use of different speed links.
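The load-based, per-packet link choice can be sketched as below. This is a minimal illustration of the idea in Python, not the FireBrick's actual scheduler; the link names, speeds, and the ratio-based selection rule are assumptions for the sketch:

```python
# Hypothetical links: configured speed in bits/s and bits currently queued.
links = [
    {"name": "dsl1", "speed": 20_000_000, "queued": 0},
    {"name": "dsl2", "speed": 5_000_000, "queued": 0},
]

def send_packet(links, size_bits):
    """Per-packet bonding sketch: send each packet on the link with the
    lowest queued load relative to its configured speed, so a faster
    link carries a proportionally larger share of the traffic."""
    link = min(links, key=lambda l: l["queued"] / l["speed"])
    link["queued"] += size_bits
    return link["name"]

# Sending many equal-sized packets splits traffic roughly 4:1,
# matching the 20M:5M speed ratio.
counts = {"dsl1": 0, "dsl2": 0}
for _ in range(1000):
    counts[send_packet(links, 12_000)] += 1
```

Because the decision is made per packet against configured speeds, even a single session fills both links, which is exactly what distinguishes bonding from load sharing.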
Bonding operates where the traffic is routed to multiple instances of the same type of route with the same metric, such as multiple FB105 tunnels, L2TP tunnels or PPPoE links.
In each case, the link can have a speed defined, either directly in the configuration of the link or by means of a named graph/shaper.
It is also possible to do bonding of Ethernet gateway routes by defining routes with a speed setting. Use the route definition for this rather than specifying a gateway on a subnet.
Simply include multiple routes, each with a speed. When bonding links, the source address will be the same for traffic on all of the links, so when bonding links to an ISP the ISP must allow that source IP through its ingress filtering. To bond the downlink on DSL lines, you need the ISP to send traffic down multiple lines.
The *FB6202 LNS can do this on multiple L2TP links with per-link speed controls. Bonding the downlink needs nothing at the receiving end: the packets simply arrive via one of multiple ordinary broadband routers and go on to the LAN.
It is obviously a good idea to use an FB2900 at the customer end to bond the uplink as well. Because bonding works on a per-packet basis, even a single session can make use of multiple lines.
However, it is worth bearing in mind that whilst IP does not guarantee packet order, most TCP stacks will have trouble if you bond more than about 4 lines or if the lines have very different latencies.
To use bonding, set the speed attribute (or use graph to link to a shaper with a speed setting) within the fb105 or ppp objects. For L2TP, the speed can be set using RADIUS. Then simply arrange routing of the same IP addresses down multiple links with the same localpref, and the traffic will be per-packet bonded.
*The FB6000 series is due to be replaced in Q1 or Q2 of 2021 by the FB9000 series. Improvements include a more powerful CPU, more RAM, and ten SFP ports (two at 10Gbit/s and eight at 1Gbit/s). Please contact us or periodically check our News page for updates on features and availability of the new FB9000 series.
Why not MLPPP?
MLPPP (Multilink PPP) is a way to bond multiple ISDN lines together, or, in theory, any PPP links, so could apply to L2TP and PPP links. It works by splitting each packet into two or more parts and sending them in parallel over multiple lines. However, it is not designed for DSL lines and so there are no plans to support MLPPP within the FireBrick.
- The main reason for splitting packets up was to make the transmission time for a single packet shorter. On a 64K ISDN line the per-packet time was much more significant, but on DSL lines it is not an issue.
- MLPPP assumes the lines have the same speed and latency, as ISDN lines do. DSL lines often have different speeds and latencies, which does not work well.
- The way MLPPP works, you are limited to multiples of the slowest line's speed. With per-packet bonding, you are not.
- The customer end needs expensive routers that can handle multiple lines at once, which limits how many lines you can use. With per-packet bonding, this can be done with normal inexpensive routers.
- MLPPP does not handle changing the number of lines very easily (e.g. fall-back on line failure) whereas per-packet bonding does.
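The "multiples of the slowest line" point above is worth a worked example. The arithmetic below is an illustration with hypothetical line speeds, not measured product figures:

```python
# Hypothetical DSL line speeds in Mbit/s.
lines = [20, 5, 2]

# MLPPP splits every packet across all the lines in parallel, so the
# slowest line paces the whole bundle: the usable aggregate is the
# number of lines times the slowest line's speed.
mlppp_aggregate = len(lines) * min(lines)   # 3 lines x 2 Mbit/s = 6

# Per-packet bonding loads each line according to its own speed,
# so the aggregate is simply the sum of the line speeds.
bonded_aggregate = sum(lines)               # 20 + 5 + 2 = 27
```

With lines of very different speeds, the gap between the two approaches is large, which is one of the main reasons given above for preferring per-packet bonding on DSL.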