Discussion: MSS clamping in NPF
Egerváry Gergely
2017-01-15 09:12:16 UTC
I'm working on an IPFilter-to-NPF transition at our site.
Unfortunately, it seems I cannot implement our firewall with NPF :-(

The biggest problem is MSS clamping. I really do need it, because Path
MTU Discovery is broken for roughly 20 percent of the Internet; users
cannot browse their favorite websites without MSS clamping.

Imagine my firewall server: it has 12 network interfaces and can reach
about 60 local IP prefixes over them. For example, 39 different subnets
are reachable via a single interface, "vlan10".

Physically, IP traffic can flow from any network to any other, so it is
all controlled by firewall rules. We are dual-stacked (IPv4 and IPv6),
so we now have a total of about 500 firewall rules with IPFilter.

That's a LOT, but they are very simple and easy to maintain.
For example, my office has the IPv4 prefix 172.28.1.0/24 and the IPv6
prefix 2001:738:7a00:1::/64. My PCs are allowed to connect anywhere,
so there are two filter rules for them (in NPF syntax):

pass stateful in final family inet4 from 172.28.1.0/24
pass stateful in final family inet6 from 2001:738:7a00:1::/64

(outgoing packets are always passed in this scenario)

These packets can go to...
- any other local networks with MTU 1500 (no MSS adjustment needed)
- to the Internet over a GRE tunnel with MTU 1468
- to the Internet over a backup GRE tunnel with MTU 1476
- directly to our ISP over a PPPoE trunk with MTU 1492
- to our remote site over a VPN tunnel with MTU 1438
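
The max-mss values used below follow directly from each path MTU: subtract
the IP and TCP header overhead (20 + 20 bytes for IPv4, 40 + 20 for IPv6).
A quick sketch to derive them, using the interface MTUs listed above:

```python
# Derive the per-family max-mss for each interface from its MTU.
# MSS = MTU - IP header - TCP header (20+20 bytes for IPv4, 40+20 for IPv6).

def max_mss(mtu, family):
    overhead = 40 if family == "inet4" else 60
    return mtu - overhead

# Interface MTUs from the setup described above
mtus = {"pppoe0": 1492, "gre0": 1468, "gre1": 1476, "tun0": 1438}
for ifname, mtu in mtus.items():
    # prints e.g. "pppoe0: inet4 1452, inet6 1432"
    print(f"{ifname}: inet4 {max_mss(mtu, 'inet4')}, inet6 {max_mss(mtu, 'inet6')}")
```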

This looks complicated, but it's very easy in real life.
For example, in PF:

scrub on pppoe0 inet max-mss 1452
scrub on pppoe0 inet6 max-mss 1432
scrub on gre0 inet max-mss 1428
scrub on gre0 inet6 max-mss 1408
scrub on gre1 inet max-mss 1436
scrub on gre1 inet6 max-mss 1416
scrub on tun0 inet max-mss 1398
scrub on tun0 inet6 max-mss 1378

And that's all! MTU and MSS have nothing to do with the filter rules;
they are tied to the specific interface.

In IPFilter, MSS clamping is implemented in the NAT code:

# IPv4: NAT + MSS adjustment
map gre0 172.28.0.0/16 -> 193.225.174.1/32 mssclamp 1428

# IPv6: no NAT, only MSS adjustment
map gre0 0/0 -> ::0/0 mssclamp 1408 tcp

That is not as nice as in PF, but it's maintainable - there are only
2-3 NAT rules per interface.

Solutions? There are probably many:

#1) Implement `scrub' as in PF
(normalization would be independent of the filter rules)

#2) Implement `match' as in newer PF releases
(normalization is done by special filter rules that do not stop
processing of subsequent filter rules)

... or, any better ideas?

Thanks,
--
Gergely EGERVARY

--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Joerg Sonnenberger
2017-01-15 12:51:25 UTC
Post by Egerváry Gergely
The biggest problem is MSS clamping. I do need it, because Path MTU
Discovery is broken on the ~20 percent of the Internet. Users cannot
browse their favorite websites without MSS clamping.
procedure "norm" {
	normalize: "max-mss" 1432
}

group default {
	pass out final on pppoe0 family inet4 all apply "norm"
}

You shouldn't need MSS clamping for IPv6, ever -- any network admin who
breaks IPv6 ICMP enough to inhibit Path MTU discovery should be fired
immediately and likely has much bigger problems already anyway.

Joerg

Egerváry Gergely
2017-01-15 13:02:50 UTC
Post by Joerg Sonnenberger
procedure "norm" {
normalize: "max-mss" 1432
}
group default {
pass out final on pppoe0 family inet4 all apply "norm"
}
Correct me if I'm wrong, but I think a packet can only match a single
rule, so this one will never match if the packet has already matched
an earlier rule.

The problem is that I have circa 500 filter rules. I can't apply "norm"
to every rule that could ever pass a packet out on pppoe0.
Post by Joerg Sonnenberger
You shouldn't need MSS clamping for IPv6 ever -- any network admin
that breaks IPv6 ICMP enough to inhibit Path MTU discovery should be
fired immediately and likely has much bigger problems already anyway.
Agreed.

--
Gergely EGERVARY

Egerváry Gergely
2017-01-15 13:10:11 UTC
Post by Egerváry Gergely
The problem is I have circa 500 filter rules. I can't apply "norm"
on all rules that can ever pass a packet on pppoe0.
... or I have to rework all of my rules: do all of the filtering on
packet ingress, and do _only_ the normalization on egress.

It sounds easy with a few rules, but it can be hard with 12 interfaces,
about 60 subnets, etc.
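
Sketched in npf.conf terms, the ingress/egress split might look like this.
This is only a rough sketch: it assumes a normalization procedure works on
egress as in Joerg's example, and it only shows one ingress interface and
one egress interface of the twelve.

```
procedure "norm-pppoe0" {
	normalize: "max-mss" 1452
}

group default {
	# ingress: all real filtering decisions are made here
	pass stateful in final on vlan10 family inet4 from 172.28.1.0/24
	pass stateful in final on vlan10 family inet6 from 2001:738:7a00:1::/64

	# egress: normalization only, no filtering decisions
	pass out final on pppoe0 family inet4 all apply "norm-pppoe0"
}
```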
--
Gergely EGERVARY


Mindaugas Rasiukevicius
2017-01-15 13:56:31 UTC
Post by Egerváry Gergely
...
And that's all! MTU and MSS has nothing to do with filter rules.
They are related to the specific interface.
I agree that MSS clamping would often be per interface.
Post by Egerváry Gergely
#1) Implement `scrub' as in PF
(Normalization will be independent from filter rules)
#2) Implement `match` as in newer PF releases
(Normalization is done with special filter rules that does not stop
processing other filter rules)
... or, any better ideas?
I am thinking of implementing the equivalent of the "match"
functionality. Internally, a group is just a rule, so it could have
rule procedures associated with it as well.
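
If groups could carry procedures, the per-interface clamping might
collapse to something like this -- purely hypothetical syntax, not
supported by any NPF release:

```
procedure "norm" {
	normalize: "max-mss" 1452
}

# hypothetical: the procedure would apply to every packet this group passes
group "ext" on pppoe0 apply "norm" {
	pass stateful out final family inet4 all
}
```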
--
Mindaugas
