    • DaBIGmeow888@alien.topB · 10 months ago

      Not much: garbage (quickly) in, garbage (quickly) out. Past a certain point, the training data set and novel techniques matter more than how quickly it's processed.

    • RollingTater@alien.topB · 10 months ago

      I work in the field and I don't know what the others here are talking about. In practice the difference in performance doesn't matter much. We just launch a bunch of jobs on the cluster, and if they finish faster it often makes no difference: I either haven't checked on my jobs yet, I'm not ready to collect the results, or I'm just dicking around with different experiments and parameters, so the actual speed of completion isn't the bottleneck. A bunch of weaker GPUs can do the same task as a stronger one; only memory really matters. Doubly true if the company is big enough that power consumption is a drop in the bucket in operational costs.

      What actually matters is the overall workflow: things like cluster downtime are way more impactful to my work than the raw performance of a GPU, as is the ease of designing and scheduling jobs/experiments on the cluster.
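
      To make that "launch a bunch of jobs and walk away" workflow concrete, here is a minimal sketch of a hyperparameter sweep launcher. The `train.py` script, its flags, the grid values, and the GPU count are all hypothetical placeholders, not anything from the thread; the point is only that each configuration runs as an independent job, so per-job speed matters less than cluster availability and ease of scheduling.

      ```python
      import itertools
      import os
      import subprocess

      # Hypothetical hyperparameter grid -- every combination becomes its own job.
      grid = {
          "lr": [1e-4, 3e-4, 1e-3],
          "batch_size": [32, 64],
      }
      configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]

      # Round-robin the jobs over whatever GPUs happen to be free; a weaker GPU
      # just means an individual run finishes later, which rarely matters when
      # nobody is waiting on any single run.
      num_gpus = 4  # assumed number of local GPUs
      procs = []
      for i, cfg in enumerate(configs):
          env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(i % num_gpus))
          cmd = [
              "python", "train.py",            # hypothetical training script
              "--lr", str(cfg["lr"]),
              "--batch-size", str(cfg["batch_size"]),
              "--out", f"runs/sweep_{i}",
          ]
          procs.append(subprocess.Popen(cmd, env=env))

      # Come back whenever and collect results; exit codes tell you which runs failed.
      for p in procs:
          p.wait()
      ```

      On a real cluster the `Popen` calls would be replaced by whatever the scheduler provides (e.g. batch job arrays), but the shape of the workflow is the same: many independent runs, collected whenever the researcher gets around to it.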

      Also, in the end all of this is moot; this type of training is probably the wrong approach to AI anyway. Note that a child does not need a million images of a ball to recognize a ball, and after learning one it would instantly recognize soccer balls, basketballs, etc. as balls. The way we train our AIs cannot do this; our current approach to AI is just brute force.

    • ResponsibleJudge3172@alien.topB · 10 months ago

      When you can choose between 1 rack of the latest Nvidia products and 3 racks of last-gen hardware to get results in the same time frame, you find that performance matters a lot.
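
      A rough back-of-envelope sketch of that trade-off. All of the numbers below (per-GPU throughput, power draw, GPUs per rack) are made-up placeholders, not vendor figures; they are only there to show how the 1-rack-vs-3-racks comparison is usually framed.

      ```python
      # Hypothetical comparison: 1 rack of current-gen GPUs vs 3 racks of last-gen,
      # sized so both deliver roughly the same total training throughput.
      # Every number here is a placeholder assumption, not a measured figure.

      gpus_per_rack = 8

      current_gen = {"racks": 1, "tflops_per_gpu": 2000, "watts_per_gpu": 700}
      last_gen    = {"racks": 3, "tflops_per_gpu": 700,  "watts_per_gpu": 400}

      def summarize(name, cfg):
          gpus = cfg["racks"] * gpus_per_rack
          throughput = gpus * cfg["tflops_per_gpu"]          # total TFLOPS (placeholder units)
          power_kw = gpus * cfg["watts_per_gpu"] / 1000      # steady-state draw
          energy_per_year_mwh = power_kw * 24 * 365 / 1000   # ignores cooling overhead
          print(f"{name}: {gpus} GPUs, ~{throughput} TFLOPS, "
                f"{power_kw:.1f} kW, ~{energy_per_year_mwh:.0f} MWh/yr, "
                f"{cfg['racks']} rack(s) of floor space")

      summarize("current gen", current_gen)
      summarize("last gen", last_gen)
      ```

      The point is the one the comment makes: at equal time-to-result, the newer rack wins on power and floor space even if the per-GPU price is higher, so at that scale raw performance does matter.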