Discussion:
squid on 9.2: No buffer space available
Stephen Borrill
2022-02-11 09:54:04 UTC
Permalink
I'm running squid 4.16 on NetBSD 9.2_STABLE.

Under heavy use, I see frequent pairs of log messages along the lines of:

comm_udp_sendto FD 22, (family=2) 127.0.0.1:53: (55) No buffer space available
idnsSendQuery FD 22: sendto: (55) No buffer space available


# sysctl kern.mbuf.nmbclusters
kern.mbuf.nmbclusters = 1038373

# netstat -m
18381 mbufs in use:
5746 mbufs allocated to data
12631 mbufs allocated to packet headers
4 mbufs allocated to socket names and addresses
0 calls to protocol drain routines

# sysctl net.inet.udp
net.inet.udp.checksum = 1
net.inet.udp.sendspace = 9216
net.inet.udp.recvspace = 41600
net.inet.udp.do_loopback_cksum = 0

I tried increasing net.inet.udp.sendspace to 32768, but it hasn't made any
difference. Any ideas for other tweaks?

As an aside, I have seen similar messages when trying to do a UDP
bandwidth test with netio.
--
Stephen

--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Michael van Elst
2022-02-11 10:50:32 UTC
Permalink
Post by Stephen Borrill
As an aside, I have seen similar messages when trying to do a UDP
bandwidth test with netio.
That's regular BSD behaviour. When you send packets faster than
they can be emitted on the wire, buffers of any size fill up and
you get ENOBUFS.

On other systems, such packets may be silently dropped (they could
be dropped silently on the network path anyway).

The correct way to handle this is to rate limit UDP packets in the
application or even to implement some kind of flow control in the
application.

Squid implements 'delay pools' for rate limiting; I'm not sure
whether that also applies to the UDP traffic between caches.

https://wiki.squid-cache.org/Features/DelayPools
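For reference, a single class-1 delay pool capping aggregate bandwidth looks roughly like this in squid.conf (the numbers are illustrative, and note it is unclear whether this covers inter-cache UDP at all):

```
# squid.conf sketch: one class-1 pool limiting aggregate traffic
# to ~64 KB/s with a 64 KB bucket (values are illustrative)
delay_pools 1
delay_class 1 1
delay_parameters 1 65536/65536
delay_access 1 allow all
```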

Another way might be to move away from ICP (UDP-based) to other
cache protocols.

Squid can also do logging via UDP; the configuration there seems
to have its own "buffer-size" option.


Stephen Borrill
2022-02-11 11:23:26 UTC
Permalink
Post by Michael van Elst
Post by Stephen Borrill
As an aside, I have seen similar messages when trying to do a UDP
bandwidth test with netio.
That's regular BSD behaviour. When you send packets faster than
they can be emitted on the wire, buffers of any size fill up and
you get ENOBUFS.
On other systems, such packets may be silently dropped (they could
be dropped silently on the network path anyway).
The correct way to handle this is to rate limit UDP packets in the
application or even to implement some kind of flow control in the
application.
Squid implements 'delay pools' for rate limiting; I'm not sure
whether that also applies to the UDP traffic between caches.
https://wiki.squid-cache.org/Features/DelayPools
Another way might be to move away from ICP (UDP-based) to other
cache protocols.
Squid can also do logging via UDP; the configuration there seems
to have its own "buffer-size" option.
This isn't to do with neighbour caches; the messages suggest it is DNS:

comm_udp_sendto FD 22, (family=2) 127.0.0.1:53: (55) No buffer space available
idnsSendQuery FD 22: sendto: (55) No buffer space available

I think this is consistent with the symptoms that were reported to me (in
a rather vague way): I could not get an established connection (e.g. a
playing YouTube video) to go wrong, but users were reporting
"intermittent internet".
--
Stephen

