Discussion:
Seeking opinion on where networking-related user space code should go
Beverly Schwartz
2013-04-16 22:55:46 UTC
Permalink
I have created a facility in the kernel for tracking mbuf clusters. (Twice at BBN, we have successfully used this cluster tracking code to find mbuf cluster leaks.)

This facility can be compiled in the kernel by enabling the option MCL_DEBUG.

If MCL_DEBUG is enabled, then tracking data is kept for each mbuf cluster. Examples of data kept:
When and where in the code the cluster was allocated.
When and where in the code the cluster was freed.
When and where the cluster was queued or dequeued.
When and where the cluster was passed from one protocol to another.

At each of these points, I note which CPU we're on, the LWP id, whether we hold KERNEL_LOCK and/or the softnet lock, and whether anything anomalous occurred. Anomalies detected: a cluster allocated twice without being freed in between, a cluster freed without being allocated, a cluster unallocated when it was expected to be allocated, and a lock not held when it was expected to be held.

I have set up code in /proc for accessing the data, but it would be nice to have a user space program to look at the data. Using kvm, this data can be inspected in a live kernel or in a core dump.

Keep in mind, there can be up to 8192 clusters, so there is potentially a *lot* of data. Using kvm, we can also follow pointers in the data to inspect the contents of mbufs and mbuf clusters. I expect that this, too, could be quite useful.

Options I have considered:
- a new usr.bin program
- adding new options to vmstat for this data
- adding new options to netstat for this data

Any thoughts or preferences?

-Bev
--
Posted automagically by a mail2news gateway at muc.de e.V.
Please direct questions, flames, donations, etc. to news-***@muc.de
Matt Thomas
2013-04-16 23:28:09 UTC
Permalink
Post by Beverly Schwartz
I have created a facility in the kernel for tracking mbuf clusters. (Twice at BBN, we have successfully used this cluster tracking code to find mbuf cluster leaks.)
This facility can be compiled in the kernel by enabling the option MCL_DEBUG.
When and where in the code the cluster was allocated.
When and where in the code the cluster was freed.
When and where the cluster was queued or dequeued.
When and where the cluster was passed from one protocol to another.
At each of these points, I note which CPU we're on, the LWP id, whether or not we have KERNEL_LOCK and/or softnet lock, if there was something anomalous. Anomalies detected: cluster allocated twice without being freed in-between, cluster freed without being allocated, cluster unallocated when expected to be allocated, lock not held when expected to be held.
I have set up code in /proc for accessing the data, but it would be nice to have a user space program to look at the data. Using kvm, this data can be inspected in a live kernel or in a core dump.
Keep in mind, there can be up to 8192 clusters, so there is potentially a *lot* of data. Using kvm, we can also follow pointers in the data to inspect the contents of mbuf's and mbuf clusters. I expect that this, too, could be quite useful.
- a new usr.bin program
- adding new options to vmstat for this data
- adding new options to netstat for this data
Any thoughts or preferences?
sysctl kern.mbuf….

seems like netstat -m should display it if available.
maybe -m -v
Beverly Schwartz
2013-04-16 23:38:45 UTC
Permalink
Post by Matt Thomas
we use sysctl for getting the routing table and pcblists.
Could you point me to an example where this is done? Thanks.

-Bev

Beverly Schwartz
2013-04-16 23:35:31 UTC
Permalink
Post by Matt Thomas
Post by Beverly Schwartz
Keep in mind, there can be up to 8192 clusters, so there is potentially a *lot* of data. Using kvm, we can also follow pointers in the data to inspect the contents of mbuf's and mbuf clusters. I expect that this, too, could be quite useful.
sysctl kern.mbuf….
seems like netstat -m should display it if available.
maybe -m -v
sysctl isn't designed to dump huge amounts of data. I can imagine it would be very annoying if, when doing sysctl kern.mbuf, a deluge of data came along. I would be producing several times more data than sysctl -a does.

netstat -m -v
sounds like a good option.

-Bev
Matt Thomas
2013-04-16 23:36:25 UTC
Permalink
Post by Beverly Schwartz
Post by Matt Thomas
Post by Beverly Schwartz
Keep in mind, there can be up to 8192 clusters, so there is potentially a *lot* of data. Using kvm, we can also follow pointers in the data to inspect the contents of mbuf's and mbuf clusters. I expect that this, too, could be quite useful.
sysctl kern.mbuf….
seems like netstat -m should display it if available.
maybe -m -v
sysctl isn't designed to dump huge amounts of data. I can imagine it would be very annoying if when doing sysctl kern.mbuf, a deluge of data comes along. I would be producing more data than what sysctl -a produces several times over.
netstat -m -v
sounds like a good option.
we use sysctl for getting the routing table and pcblists.

