iantimmis@alien.top to Machine Learning@academy.garden • Are LLMs at a practical limit for layer stacking? [D]
10 months ago
It becomes very expensive compute-wise, but where we're actually running up against the edge is the scale of the data. Researchers have discovered "scaling laws" (see the Chinchilla paper) that determine how big your model should be given the amount of data you have. We could go bigger, but there's no reason to use, say, a multi-trillion-parameter model, because it would just be wasted capacity.
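As a rough back-of-the-envelope sketch of that rule of thumb (assuming the commonly cited ~20 training tokens per parameter and the standard C ≈ 6·N·D FLOPs approximation; the paper's fitted coefficients differ slightly):

```python
import math

# Simplifications of Chinchilla's fitted scaling laws, not exact values:
TOKENS_PER_PARAM = 20      # approximate compute-optimal tokens-per-parameter ratio
FLOPS_PER_PARAM_TOKEN = 6  # standard forward+backward FLOPs estimate per param per token

def compute_optimal(flops_budget: float) -> tuple[float, float]:
    """Return (params, tokens) that balance C = 6 * N * D
    under the constraint D = 20 * N."""
    n_params = math.sqrt(flops_budget / (FLOPS_PER_PARAM_TOKEN * TOKENS_PER_PARAM))
    n_tokens = TOKENS_PER_PARAM * n_params
    return n_params, n_tokens

# Example: with ~5.76e23 FLOPs (roughly Chinchilla's training budget),
# this gives ~70B params trained on ~1.4T tokens, matching the paper.
n, d = compute_optimal(5.76e23)
print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")
```

Under that ratio, a multi-trillion-parameter model would need tens of trillions of training tokens to be compute-optimal, which is the data wall the comment is pointing at.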
Damn this post resonates so hard.