One of my initial kernel projects was Dallas 1-wire: https://www.kernel.org/doc/Documentation/w1/
I’m still a maintainer, although quite occasional.
It was much later that I began working with VFS and created Elliptics/PohmelFS, but before that there was a bit of hardware time.
In particular, I really liked soldering bits onto my Matrox VGA card, which had w1 pins available. It used the 1-wire bus for some internal control, but since the bus is addressable, one can attach other devices and things will not explode.
Anyway, the w1 bus is still quite actively used, and I created (around the same time as w1 itself, some 7-10 years ago?) a netlink interface, running over the kernel connector, from userspace to the kernel w1 bus.
Using netlink from userspace allows one to search for devices and perform all other actions with noticeably lower latencies than working with sysfs files.
Over time the userspace example code was lost and recovered and lost again and again, but apparently people do want to use it to monitor the w1 bus from userspace.
So I created a github page with that example code: https://github.com/bioothod/w1
$ cat /proc/buddyinfo
Node 0, zone      DMA      0      1      1      1      1      1      1      0      1      1      3
Node 0, zone    DMA32    416    420    367    372    330    300    251    223    175    139    304
Node 0, zone   Normal  12587      0      0      0      0      0      0      0      0      0      1
Node 1, zone   Normal   5789  48426  53853  17277   5661   1619    147   6152   5336   2296     1M
This is a fun situation, where virtually all memory on node0 is used and what little is free is totally fragmented, while the memory on node1 is not used at all.
This likely happens because of the CPU affinity of the processes doing heavy IO. Actually, none of the processes has a non-default affinity mask, which is -1 (0xffffffff) here.
The load average is small, just 1-2, which means that only 1-2 processes are running at any given moment. And likely the Linux scheduler always puts them on cores of node0's processor, so they only suck memory from the node0 NUMA region, which leads to this heavy fragmentation.
This is the 3.2.2-1 Ubuntu kernel, and I do not really know what to do with it :)