Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say “our new method improved the results by X%”. Nobody performs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?

  • Ambiwlans@alien.topB

    Statistical significance doesn’t make sense when nothing is stochastic. … They test the entirety of the benchmark.

  • SciGuy42@alien.topB

    I review for AAAI, NeurIPS, etc. If a paper doesn’t report some notion of variance, standard deviation, etc., I have no choice but to reject, since it’s impossible to tell whether the proposed approach is actually better. In the rebuttals, the authors’ response is typically “well, everyone else also does it this way”. Ideally, I’d like to see an actual test of statistical significance.
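
    A minimal sketch of what I have in mind, assuming both methods were run with a handful of random seeds on the same benchmark (the seed counts, scores, and names below are purely illustrative):

    ```python
    # Sketch: compare per-seed scores of a proposed method against a baseline.
    # The arrays are placeholders; in practice they come from repeated training runs.
    import numpy as np
    from scipy import stats

    baseline = np.array([0.812, 0.807, 0.815, 0.809, 0.811])  # 5 seeds, baseline
    proposed = np.array([0.818, 0.821, 0.816, 0.823, 0.819])  # 5 seeds, new method

    print(f"baseline: {baseline.mean():.3f} +/- {baseline.std(ddof=1):.3f}")
    print(f"proposed: {proposed.mean():.3f} +/- {proposed.std(ddof=1):.3f}")

    # Paired t-test, since both methods are evaluated with matched seeds/splits.
    t, p = stats.ttest_rel(proposed, baseline)
    print(f"paired t-test: t={t:.2f}, p={p:.4f}")
    ```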

    • iswedlvera@alien.topB

      I think OP is referring to hypothesis tests against a baseline. What’s the point in reporting variance and standard deviation? My outputs on regression tasks are always non-normal. I tend to always plot the cumulative frequency, but assigning a single number to the distribution, such as the variance, has very little meaning.
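
      For what it’s worth, a rough sketch of the kind of comparison I mean: plot the empirical CDF of the absolute errors and use a non-parametric paired test instead of quoting a variance (the data and parameters here are made up for illustration):

      ```python
      # Sketch: compare two models' absolute errors on the same test set
      # without assuming normality (placeholder data).
      import numpy as np
      import matplotlib.pyplot as plt
      from scipy import stats

      rng = np.random.default_rng(0)
      err_baseline = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # per-example |error|, baseline
      err_proposed = rng.lognormal(mean=-0.1, sigma=1.0, size=500)  # per-example |error|, new model

      # Empirical CDFs instead of a single variance number.
      for errs, label in [(err_baseline, "baseline"), (err_proposed, "proposed")]:
          xs = np.sort(errs)
          ys = np.arange(1, len(xs) + 1) / len(xs)
          plt.step(xs, ys, where="post", label=label)
      plt.xlabel("absolute error")
      plt.ylabel("cumulative frequency")
      plt.legend()
      plt.show()

      # Non-parametric paired test on the per-example errors.
      stat, p = stats.wilcoxon(err_baseline, err_proposed)
      print(f"Wilcoxon signed-rank: stat={stat:.1f}, p={p:.4f}")
      ```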

  • devl82@alien.topB

    They don’t even report whether ANY kind of statistically sane validation method was used when selecting model parameters (usually a single number is reported), and you expect rigorous statistical significance testing? That.is.bold.

  • neo_255_0_0@alien.topB

    You’d be surprised to know that most of academia is so focused on publishing that rigor is not even a priority. Forget reproducibility. This is in part because repeated experiments would require more time and resources, both of which are constraints.

    That is why most of the good tech that can actually be validated is produced by industry.

  • bikeranz@alien.topB

    In part because it can be prohibitively expensive to generate those results, and partly laziness. I used to go for a minimum of 3 random starts, until I was told to stop wasting resources on our cluster.
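
    For completeness, the bare-bones version of that protocol looks something like this; `train_and_evaluate` is a hypothetical stand-in for the actual training pipeline:

    ```python
    # Sketch: a few random restarts, then report mean and standard deviation.
    import numpy as np

    def train_and_evaluate(seed: int) -> float:
        # Hypothetical placeholder: swap in the real training + evaluation code.
        rng = np.random.default_rng(seed)
        return 0.80 + 0.01 * rng.standard_normal()

    seeds = [0, 1, 2]  # "a minimum of 3 random starts"
    scores = np.array([train_and_evaluate(s) for s in seeds])
    print(f"score: {scores.mean():.3f} +/- {scores.std(ddof=1):.3f} over {len(seeds)} seeds")
    ```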

  • colintbowers@alien.topB

    It depends which book/paper you pick up. Anyone who comes at it from a probabilistic background is more likely to discuss statistical significance. For example, the Hastie, Tibshirani, and Friedman textbook discusses it in detail, and they consider it in many of their examples; e.g. the neural net chapter uses boxplots in all the examples.
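
    A rough sketch of that style of reporting, assuming per-fold or per-seed scores have already been collected for each method (all the numbers are placeholders):

    ```python
    # Sketch: boxplots of per-run scores for several methods (placeholder data).
    import matplotlib.pyplot as plt

    scores = {
        "baseline": [0.81, 0.79, 0.82, 0.80, 0.81],
        "method A": [0.83, 0.82, 0.84, 0.81, 0.83],
        "method B": [0.82, 0.85, 0.83, 0.84, 0.82],
    }
    plt.boxplot(list(scores.values()), labels=list(scores.keys()))
    plt.ylabel("accuracy")
    plt.show()
    ```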

  • GullibleEngineer4@alien.topB

    Isn’t cross-validation (for prediction tasks) an alternative to, and I daresay even better than, statistical significance tests?

    I am referring to the seminal paper “Statistical Modeling: The Two Cultures” by Leo Breiman, if someone wants to know where I am coming from.

    Paper: https://www.jstor.org/stable/2676681
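
    As a concrete illustration of that route, a small sketch comparing two models with k-fold cross-validation in scikit-learn (the dataset and models are arbitrary placeholders):

    ```python
    # Sketch: compare two models via 10-fold cross-validation instead of a single split.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    for name, model in [
        ("logistic regression", LogisticRegression(max_iter=5000)),
        ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ]:
        scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std(ddof=1):.3f} over {len(scores)} folds")
    ```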

    • Brudaks@alien.topB

      Cross-validation is a reasonable alternative; however, it increases your compute cost 5–10 times, or, more likely, means that you end up with 5–10× smaller models, which are worse than the single model you could have built with the same budget.

  • AwarenessPlayful7384@alien.topB

    Because each experiment is so expensive that sometimes it just doesn’t make sense to do that. Imagine training a large model on a huge dataset several times just to obtain a numerical mean and variance that don’t mean much.

  • longgamma@alien.topB

    If your training data is sufficiently large, then isn’t any improvement in a metric statistically significant?
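
    That intuition is easy to check numerically; a small sketch of a two-proportion z-test on a hypothetical 0.5-point accuracy gap shows how the p-value shrinks as the test set grows (all numbers are invented):

    ```python
    # Sketch: the same small accuracy gap becomes "significant" once the test set is large enough.
    import numpy as np
    from scipy import stats

    acc_baseline, acc_proposed = 0.900, 0.905   # hypothetical accuracies
    for n in (1_000, 10_000, 100_000):          # test-set sizes
        pooled = (acc_baseline + acc_proposed) / 2
        se = np.sqrt(2 * pooled * (1 - pooled) / n)   # pooled standard error
        z = (acc_proposed - acc_baseline) / se
        p = 2 * stats.norm.sf(abs(z))                 # two-sided p-value
        print(f"n={n:>7}: z={z:.2f}, p={p:.4f}")
    ```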

  • gunshoes@alien.topB

    Most ML work is oriented towards industry, which in turn is oriented towards customers. Most users don’t understand p-values, but they do understand one number being bigger than another. Add on that checking p-values would probably remove 75% of research publications, and there’s just no incentive.

  • AltruisticCoder@alien.topB

    They should, but many don’t, because often their results are not statistically significant, or they would have to spend a ton of compute only to show very small statistically significant improvements. So they’ll just put 5-run averages (sometimes even less) and hope for the best. I have been a reviewer at most of the top ML conferences, and I’m usually the only reviewer holding people accountable for the statistical significance of their results when confidence intervals are missing.
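
    For what that could look like in practice, here is a rough sketch of a bootstrap confidence interval on the difference between two methods’ run averages (the per-run scores are placeholders):

    ```python
    # Sketch: bootstrap CI for the difference in mean score between two methods,
    # using only the handful of per-run results papers typically report.
    import numpy as np

    baseline = np.array([71.2, 70.8, 71.5, 70.9, 71.1])  # 5 runs, placeholder
    proposed = np.array([71.9, 71.4, 72.3, 71.6, 71.8])  # 5 runs, placeholder

    rng = np.random.default_rng(0)
    diffs = []
    for _ in range(10_000):
        b = rng.choice(baseline, size=baseline.size, replace=True)
        p = rng.choice(proposed, size=proposed.size, replace=True)
        diffs.append(p.mean() - b.mean())

    lo, hi = np.percentile(diffs, [2.5, 97.5])
    print(f"mean difference: {proposed.mean() - baseline.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```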

  • me_but_darker@alien.topB

    Without going into some of the fallacies that people posted in the thread, I’ll share some basic strategies I personally use to validate my work:

    • Bootstrap sampling to train and test the model.
    • Modifying the random seed.
    • Using inferential statistics (confidence intervals if you’re a fan of frequentist statistics, or ROPE if you’re a fan of Bayesian).

    I repeat the experiment at least 30 times (using small datasets), draw a distribution and analyze the results.

    This is very basic and easy, and if someone complains about compute, it can be automated to run overnight on commodity hardware, or done with a smaller dataset, or by building a simple benchmark and comparing performance (a rough sketch is below).
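
    A compressed sketch of that loop, using a small public dataset and a generic scikit-learn model purely as placeholders:

    ```python
    # Sketch: repeat train/test ~30 times with different seeds and bootstrap resamples,
    # then look at the resulting score distribution instead of a single number.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)   # placeholder small dataset
    scores = []
    for seed in range(30):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        X_boot, y_boot = resample(X_tr, y_tr, random_state=seed)  # bootstrap sample of the train split
        model = LogisticRegression(max_iter=5000).fit(X_boot, y_boot)
        scores.append(accuracy_score(y_te, model.predict(X_te)))

    scores = np.array(scores)
    lo, hi = np.percentile(scores, [2.5, 97.5])
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std(ddof=1):.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
    ```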

    As to OP’s question, I personally feel that ML is more focused on optimizing metrics to achieve a goal and less focused on inferential analysis or the feasibility of results. As an example, I see a majority of Kaggle notebooks using logistic regression without checking its assumptions.

  • srpulga@alien.topB

    In industry, cross-validation is a good measure of a model’s utility, which is what matters in the end. But I agree that academia should definitely report some measure of uncertainty, particularly in benchmarks.