Things we like and don’t. Breaking something because we believe it to be of overall benefit.
There are always conditions we like and conditions we don’t. Like good weather and rain, clean streets and dirty ones… benchmarks which show we are wrong and benchmarks which do not.
There are times when we have made changes which we knew full well reduced dbench’s throughput, because we believed them to be of overall benefit.
Theorists do not and cannot accept that they can be wrong. And if practice shows they are, it is simply time for a new theory. This is a world seen through pink glasses. Without them, though, every test is created precisely to show that there are problems. One can argue endlessly that something cannot happen in real life, but the test itself already is real life, and thus there may be other people out there who use similar techniques. They should not, but they can. And in pink glasses we do not see such people, and we can break something just because we believed the change to be of overall benefit. We believe in the theory and do not accept practical tests which show we are wrong.
Overall, this is definitely the direction we should go. Absolutely. I want whatever bit Andrew Morton to bite me too. I want pink glasses; although no, I already wear a pair… Although in a pink world everyone has perfect vision for the things they believe to be of overall benefit.
That’s why the tbench performance degradation, reported multiple times, simply disappeared from the regression list. Only after David Miller showed that high-resolution timers made sparc’s wake_up() noticeably slower were they disabled in the scheduler. And this was actually part of the dbench tests. And in my own tests two weeks ago I showed that hrticks accounted for only 80 MB/s out of the 120 MB/s loss… I would not be surprised if high-resolution timers affect other timer-sensitive operations, like BH context ‘generation’.
P.S. I want to be an idealist; unfortunately, I am transforming into a cynic.