Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say "our new method improves on the benchmark by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when the results are broken down further, e.g. into a per-class analysis in object classification, this seems important to me. Or am I overlooking something?
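To make concrete what I have in mind for the classification case: something like a McNemar-style test on the paired per-example predictions of the two methods. A rough sketch below (the correctness arrays are just random placeholders; you'd plug in your own per-example results):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Placeholder per-example correctness of method Y and benchmark Z on the SAME test set.
y_correct = rng.random(1000) < 0.82
z_correct = rng.random(1000) < 0.80

# McNemar's exact test: only the discordant pairs tell us which model is better here.
b = int(np.sum(y_correct & ~z_correct))   # Y right, Z wrong
c = int(np.sum(~y_correct & z_correct))   # Y wrong, Z right
result = binomtest(b, n=b + c, p=0.5)
print(f"discordant pairs: {b} vs {c}, p-value = {result.pvalue:.4f}")
```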
I review for AAAI, NeurIPS, etc. If a paper doesn't report some notion of variance, standard deviation, etc., I have no choice but to reject, since it's impossible to tell whether the proposed approach is actually better. In the rebuttals, the authors' response is typically "well, everyone else also does it this way". Ideally, I'd like to see an actual test of statistical significance.
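By "an actual test" I don't mean anything elaborate: report per-seed scores and run a paired test on them. A minimal sketch with made-up numbers (the accuracy arrays are placeholders for results from, say, 10 training seeds):

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-seed test accuracies for the baseline and the proposed method.
baseline = np.array([0.812, 0.805, 0.820, 0.798, 0.815, 0.809, 0.803, 0.818, 0.811, 0.807])
proposed = np.array([0.821, 0.814, 0.825, 0.802, 0.824, 0.811, 0.810, 0.826, 0.819, 0.812])

print(f"baseline: {baseline.mean():.3f} +/- {baseline.std(ddof=1):.3f}")
print(f"proposed: {proposed.mean():.3f} +/- {proposed.std(ddof=1):.3f}")

# Paired, nonparametric test across seeds (no normality assumption needed).
stat, p = wilcoxon(proposed, baseline)
print(f"Wilcoxon signed-rank p-value = {p:.4f}")
```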
I think the OP is referring to hypothesis tests against a baseline. What's the point in reporting variance and standard deviation? My outputs on regression tasks are always non-normal. I tend to always plot the cumulative frequency, but condensing the distribution into a single number such as the variance carries very little meaning.
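For comparing two regression models without any normality assumption, something like a paired bootstrap on the metric itself seems more useful to me than a variance column. A rough sketch, with synthetic heavy-tailed errors standing in for real per-example results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-example squared errors for two models on the same test set (heavy-tailed).
err_baseline = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
err_new = err_baseline * rng.lognormal(mean=-0.05, sigma=0.3, size=2000)

# Paired bootstrap of the RMSE difference: resample test examples, so no
# distributional assumption on the errors is needed.
n = len(err_baseline)
diffs = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    diffs.append(np.sqrt(err_new[idx].mean()) - np.sqrt(err_baseline[idx].mean()))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for RMSE difference (new - baseline): [{lo:.4f}, {hi:.4f}]")
```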