Often when I read ML papers the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say "our results improved with our new method by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?
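
For concreteness, the kind of check I have in mind is something like a paired bootstrap over the shared test set. A minimal sketch, assuming two arrays of per-example predictions `preds_a` and `preds_b` plus the true labels `y_true` (all placeholder names, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def paired_bootstrap(y_true, preds_a, preds_b, n_boot=10_000):
    """Rough two-sided p-value for the accuracy gap between two models
    evaluated on the *same* test set (paired bootstrap resampling)."""
    y_true = np.asarray(y_true)
    correct_a = (np.asarray(preds_a) == y_true).astype(float)
    correct_b = (np.asarray(preds_b) == y_true).astype(float)
    observed = correct_a.mean() - correct_b.mean()

    n = len(y_true)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample test examples
        diffs[i] = correct_a[idx].mean() - correct_b[idx].mean()

    # how often the bootstrap distribution of the gap crosses zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, min(p, 1.0)

# toy usage: model A ~82% accurate, model B ~80% accurate on 2000 examples
y  = rng.integers(0, 10, size=2000)
pa = np.where(rng.random(2000) < 0.82, y, (y + 1) % 10)
pb = np.where(rng.random(2000) < 0.80, y, (y + 1) % 10)
gap, p = paired_bootstrap(y, pa, pb)
print(f"accuracy gap = {gap:.3f}, approx p = {p:.3f}")
```

Resampling the same indices for both models keeps the comparison paired, so per-example difficulty cancels out instead of inflating the variance.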

  • iswedlvera@alien.topB · 1 year ago

    This is the reason. People run significance tests when they want to draw conclusions about an entire population from something like 20 samples. If you have thousands of samples there isn't much point.
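
    For a rough sense of scale, here is a back-of-the-envelope sketch, assuming a baseline accuracy around 80% and a reported 1-point improvement (both illustrative numbers), of how the binomial standard error shrinks with test-set size:

    ```python
    import math

    acc = 0.80   # assumed baseline accuracy (illustrative)
    gap = 0.01   # assumed 1-point improvement reported by the new method

    for n in (20, 200, 2_000, 20_000):
        se = math.sqrt(acc * (1 - acc) / n)   # binomial standard error of accuracy
        print(f"n={n:>6}: SE ≈ {se:.4f}, gap/SE ≈ {gap / se:.1f}")
    ```

    With 20 samples the gap is buried in noise (gap/SE ≈ 0.1); with 20,000 samples it is roughly 3.5 standard errors, so a formal test rarely changes the conclusion.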

      • iswedlvera@alien.topB · 1 year ago

        I see what you mean. Yeah, I shouldn't skip statistical significance tests by default.