Can LLMs stack more layers than the largest ones currently have, or is it bottlenecked somewhere? Is it because the gradients can't propagate properly back to the beginning of the network? Because inference would be too slow?
If anyone could point me to a paper that talks about scaling layer depth, I would love to read it!
They definitely can go deeper - with skip connections and normalization you can propagate gradients through an architecture of pretty much any depth.
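A minimal sketch of what that looks like (assuming PyTorch and a pre-norm residual block; the layer sizes and depth are just placeholders): the `x + ...` skip connection gives gradients an identity path around every layer, which is what lets you stack a lot of them.

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """One pre-norm residual block: LayerNorm -> MLP -> add back the input.
    The skip connection means the identity path always carries gradient,
    no matter how many of these blocks are stacked."""
    def __init__(self, d_model: int = 512, d_hidden: int = 2048):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))  # residual / skip connection

# Stack as many blocks as you like; gradients still reach the input.
deep_net = nn.Sequential(*[PreNormBlock() for _ in range(96)])
x = torch.randn(4, 16, 512, requires_grad=True)
deep_net(x).sum().backward()
print(x.grad.abs().mean())  # non-vanishing gradient at the very first layer
```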
Adding more layers isn't free, though: each layer adds parameters and thus compute. There's also an optimal depth-to-width ratio for a given parameter count.
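Rough back-of-the-envelope illustration of that trade-off (the 12·d² figure is the usual approximation for parameters per transformer block, ignoring embeddings and biases; the configs below are made up): the same parameter budget can be spent deep-and-narrow or shallow-and-wide, and scaling-law work is basically about finding the best split.

```python
# Approximate parameter count per transformer block ~= 12 * d_model^2
# (attention ~4*d^2 + MLP ~8*d^2), ignoring embeddings, biases, layernorms.
def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

# Two hypothetical ways to spend the same ~1.2B-parameter budget:
print(approx_params(n_layers=24, d_model=2048))  # shallower and wider
print(approx_params(n_layers=96, d_model=1024))  # deeper and narrower
```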
Exactly the answer I was looking for, thank you!
It becomes very expensive compute-wise, but where we're actually running up against the edge is the scale of the data. Researchers have derived "scaling laws" (see the Chinchilla paper) that determine how big your model should be given the amount of data you have. We could go bigger, but there's no reason to use, say, a multi-trillion-parameter model, because it would just be wasted capacity.
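For a rough sense of what that implies (this uses the commonly cited ~20-tokens-per-parameter rule of thumb attributed to Chinchilla, not the paper's fitted formulas, and the token count below is a hypothetical figure): for a fixed amount of training data, the compute-optimal model size is bounded, which is why "just make it bigger" stops paying off.

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# ~20 training tokens per model parameter (approximation only).
TOKENS_PER_PARAM = 20

def compute_optimal_params(num_training_tokens: float) -> float:
    return num_training_tokens / TOKENS_PER_PARAM

# e.g. a 10-trillion-token dataset (hypothetical):
print(f"{compute_optimal_params(10e12) / 1e9:.0f}B parameters")  # ~500B
```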